Rust’s async ecosystem, powered by Tokio, enables you to build networked services that handle millions of concurrent connections with minimal resource usage. Let’s explore how to build high-performance async applications.

Why Async Rust?

Traditional threaded servers create one OS thread per connection. With 10,000 concurrent connections, you need 10,000 threads—each consuming memory for its stack and causing context-switching overhead.

Async Rust uses cooperative multitasking: thousands of tasks share a small thread pool. Each task yields when waiting for I/O, allowing other tasks to run.

Setting Up Tokio

Add to your Cargo.toml:

[dependencies]
tokio = { version = "1", features = ["full"] }

Basic async main:

#[tokio::main]
async fn main() {
    println!("Hello from async Rust!");
}

Building a TCP Echo Server

Let’s build a server that echoes messages back to clients:

use tokio::net::{TcpListener, TcpStream};
use tokio::io::{AsyncReadExt, AsyncWriteExt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    println!("Server listening on port 8080");

    loop {
        let (socket, addr) = listener.accept().await?;
        println!("New connection from {}", addr);
        
        // Spawn a new task for each connection
        tokio::spawn(async move {
            if let Err(e) = handle_connection(socket).await {
                eprintln!("Error handling {}: {}", addr, e);
            }
        });
    }
}

async fn handle_connection(mut socket: TcpStream) -> Result<(), Box<dyn std::error::Error>> {
    let mut buffer = [0u8; 1024];
    
    loop {
        let n = socket.read(&mut buffer).await?;
        
        if n == 0 {
            // Connection closed
            return Ok(());
        }
        
        socket.write_all(&buffer[..n]).await?;
    }
}

Concurrent Operations with join!

Execute multiple operations concurrently:

use tokio::time::{sleep, Duration};

// Minimal types so the example compiles on its own
#[derive(Debug)]
struct User { id: u64, name: String }

#[derive(Debug)]
struct Order { id: u64, user_id: u64 }

async fn fetch_user(id: u64) -> User {
    // Simulate API call
    sleep(Duration::from_millis(100)).await;
    User { id, name: format!("User {}", id) }
}

async fn fetch_orders(user_id: u64) -> Vec<Order> {
    sleep(Duration::from_millis(150)).await;
    vec![Order { id: 1, user_id }]
}

async fn get_user_with_orders(user_id: u64) -> (User, Vec<Order>) {
    // Both operations run concurrently
    let (user, orders) = tokio::join!(
        fetch_user(user_id),
        fetch_orders(user_id)
    );
    
    (user, orders)
}

Timeouts and Error Handling

Prevent tasks from hanging forever:

use tokio::time::{timeout, Duration};

// `fetch`, `Response`, and `Error` are application-defined; only the
// timeout pattern matters here.
async fn fetch_with_timeout(url: &str) -> Result<Response, Error> {
    match timeout(Duration::from_secs(30), fetch(url)).await {
        Ok(Ok(response)) => Ok(response),
        Ok(Err(e)) => Err(Error::FetchError(e)),
        Err(_) => Err(Error::Timeout),
    }
}

Channel-Based Communication

Use channels for communication between tasks:

use tokio::sync::mpsc;
use tokio::time::Duration;

#[derive(Debug)]
enum Command {
    Get { key: String, resp: tokio::sync::oneshot::Sender<Option<String>> },
    Set { key: String, value: String },
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<Command>(32);
    
    // Database manager task
    let manager = tokio::spawn(async move {
        let mut db = std::collections::HashMap::new();
        
        while let Some(cmd) = rx.recv().await {
            match cmd {
                Command::Get { key, resp } => {
                    let value = db.get(&key).cloned();
                    let _ = resp.send(value);
                }
                Command::Set { key, value } => {
                    db.insert(key, value);
                }
            }
        }
    });
    
    // Client tasks
    let tx1 = tx.clone();
    tokio::spawn(async move {
        tx1.send(Command::Set {
            key: "hello".to_string(),
            value: "world".to_string(),
        }).await.unwrap();
    });
    
    let tx2 = tx.clone();
    tokio::spawn(async move {
        let (resp_tx, resp_rx) = tokio::sync::oneshot::channel();
        tx2.send(Command::Get {
            key: "hello".to_string(),
            resp: resp_tx,
        }).await.unwrap();
        
        if let Ok(value) = resp_rx.await {
            println!("Got value: {:?}", value);
        }
    });
    
    // Wait a bit then shutdown
    tokio::time::sleep(Duration::from_secs(1)).await;
    drop(tx);
    manager.await.unwrap();
}

Graceful Shutdown

Handle SIGTERM and SIGINT properly:

use tokio::net::TcpListener;
use tokio::signal;
use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    let (shutdown_tx, _) = broadcast::channel::<()>(1);
    
    // Start server
    let server_shutdown = shutdown_tx.subscribe();
    let server = tokio::spawn(run_server(server_shutdown));
    
    // Wait for shutdown signal
    signal::ctrl_c().await.expect("Failed to listen for ctrl-c");
    println!("Shutdown signal received");
    
    // Notify all tasks
    let _ = shutdown_tx.send(());
    
    // Wait for server to finish
    server.await.unwrap();
    println!("Server shut down gracefully");
}

async fn run_server(mut shutdown: broadcast::Receiver<()>) {
    let listener = TcpListener::bind("127.0.0.1:8080").await.unwrap();
    
    loop {
        tokio::select! {
            result = listener.accept() => {
                let (socket, _) = result.unwrap();
                // Reuses handle_connection from the echo server above
                tokio::spawn(handle_connection(socket));
            }
            _ = shutdown.recv() => {
                println!("Server shutting down...");
                break;
            }
        }
    }
}

Performance Tuning

Worker Threads

Configure the runtime for your workload:

#[tokio::main(flavor = "multi_thread", worker_threads = 4)]
async fn main() {
    // Uses 4 worker threads
}

// Or for I/O-bound work with many concurrent tasks:
#[tokio::main(flavor = "multi_thread")]
async fn main() {
    // Uses num_cpus threads by default
}

Buffered I/O

Wrap streams in buffered readers and writers when you make many small reads or writes:

use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader, BufWriter};
use tokio::net::TcpStream;

async fn handle_lines(stream: TcpStream) {
    let (reader, writer) = stream.into_split();
    let mut reader = BufReader::new(reader);
    let mut writer = BufWriter::new(writer);
    
    let mut line = String::new();
    while reader.read_line(&mut line).await.unwrap() > 0 {
        writer.write_all(line.as_bytes()).await.unwrap();
        writer.flush().await.unwrap();
        line.clear();
    }
}

Connection Pooling

Reuse connections for repeated requests:

use deadpool_postgres::{Config, Pool, Runtime};

async fn create_pool() -> Pool {
    let mut cfg = Config::new();
    cfg.host = Some("localhost".to_string());
    cfg.dbname = Some("mydb".to_string());
    cfg.pool = Some(deadpool_postgres::PoolConfig::new(16));
    
    cfg.create_pool(Some(Runtime::Tokio1), tokio_postgres::NoTls).unwrap()
}

async fn query_user(pool: &Pool, id: i64) -> User {
    let client = pool.get().await.unwrap();
    let row = client
        .query_one("SELECT * FROM users WHERE id = $1", &[&id])
        .await
        .unwrap();
    
    User::from_row(row) // application-defined conversion from a row
}

Conclusion

Async Rust with Tokio provides:

  • Efficient handling of thousands of concurrent connections
  • Zero-cost async/await syntax
  • Powerful primitives for coordination
  • Production-ready performance

The learning curve is steep, but the performance payoff is substantial. At Sajima Solutions, we use async Rust to build networked services that handle extreme loads efficiently. Contact us to discuss your performance-critical systems.