Deploying VLESS + REALITY: A Stealth Proxy Guide for Debian
Introduction
A brief guide to setting up a VLESS + REALITY server on a Debian-based system using binary releases.
VLESS is a stateless proxy protocol designed for high performance and low overhead. REALITY is a security layer that eliminates TLS fingerprints by “borrowing” the identity of other websites. Together, they provide superior stealth against Deep Packet Inspection (DPI).
- Protocol (VLESS): Handles user authentication (UUID) and data routing. It does not encrypt data on its own.
- Security (REALITY): Wraps VLESS in a modified TLS 1.3 tunnel. It mimics the handshake of a chosen real website to defeat active probing and fingerprinting.
Steps
1. Pre-requisites
Download and extract the Xray-core binary, and move it to /usr/local/bin for system-wide access:
wget https://github.com/XTLS/Xray-core/releases/latest/download/Xray-linux-64.zip
sudo apt install unzip
unzip ./Xray-linux-64.zip
sudo cp xray /usr/local/bin/
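Before moving on, it is worth confirming the binary landed on the PATH and runs; a quick sketch (the `version` subcommand prints the build banner):

```shell
# Confirm the xray binary is installed and executable
if command -v xray >/dev/null 2>&1; then
  xray version
else
  echo "xray not found in PATH" >&2
fi
```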
2. Server Configuration
Create a config file for the Xray server under /etc/xray/ with the following content (modify as needed):
- Example:
sudo mkdir -p /etc/xray
sudo vim /etc/xray/config.json
{
  "log": {
    // Path and level of the logs, can be anywhere
    "access": "/var/log/xray/access.log",
    "error": "/var/log/xray/error.log",
    "loglevel": "warning"
  },
  "inbounds": [
    {
      "tag": "reality-inbound",
      "listen": "0.0.0.0",
      "port": 443,
      "protocol": "vless",
      "settings": {
        "clients": [
          {
            "id": "<YOUR_UUID_HERE>",
            "flow": "xtls-rprx-vision"
          }
        ],
        "decryption": "none",
        // Enable UDP relay
        "udp": true
      },
      "streamSettings": {
        "network": "tcp",
        "security": "reality",
        // sockopt belongs at the streamSettings level, not inside realitySettings
        "sockopt": {
          // Enable TCP Fast Open
          "tcpFastOpen": true
        },
        "realitySettings": {
          "show": false,
          "dest": "www.example.com:443",
          "serverNames": ["example.com", "www.example.com"],
          "privateKey": "<YOUR_PRIVATE_KEY_HERE>",
          "shortIds": [
            "<YOUR_SHORT_ID_1>",
            "<YOUR_SHORT_ID_2>",
            "<YOUR_SHORT_ID_3>"
          ]
        }
      }
    }
  ],
  "outbounds": [
    {
      "tag": "direct",
      "protocol": "freedom",
      "settings": {}
    },
    {
      "tag": "blocked",
      "protocol": "blackhole",
      "settings": {}
    }
  ],
  "routing": {
    "domainStrategy": "AsIs",
    "rules": [
      {
        "type": "field",
        "ip": [
          "127.0.0.0/8",
          "10.0.0.0/8",
          "172.16.0.0/12",
          "192.168.0.0/16"
        ],
        "outboundTag": "blocked"
      }
    ]
  }
}
- Generate UUID:
xray uuid
- Role: This acts as your “User ID” for the VLESS protocol.
- Generate Reality Key Pair:
xray x25519
- Role: This generates a private key and public key for the REALITY protocol. The private key stays on the server, and the public key goes to the client.
- Generate Short IDs (16-character hex string):
openssl rand -hex 8
- Role: Used by REALITY to validate legitimate clients quickly. Generate an 8-byte value, since each byte is 2 hex characters (8 bytes = 16 hex characters).
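The credentials above can also be produced in one pass. A minimal sketch using only stock Linux tools: the kernel's random UUID stands in for `xray uuid`, and `openssl rand` produces the short IDs; the X25519 key pair still requires `xray x25519`, as noted in the final comment.

```shell
#!/bin/sh
# Generate the VLESS UUID (same format as `xray uuid` produces)
uuid=$(cat /proc/sys/kernel/random/uuid)

# Generate three REALITY short IDs: 8 random bytes = 16 hex characters each
sid1=$(openssl rand -hex 8)
sid2=$(openssl rand -hex 8)
sid3=$(openssl rand -hex 8)

echo "UUID:      $uuid"
echo "Short IDs: $sid1 $sid2 $sid3"

# The REALITY key pair must come from xray itself (its own base64 encoding):
#   xray x25519
```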
3. Set up as a systemd service
Create a systemd service file for the Xray server:
sudo vim /etc/systemd/system/xray.service
Add the following content:
[Unit]
Description=Xray Service
After=network.target nss-lookup.target
[Service]
User=root
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
NoNewPrivileges=true
ExecStartPre=/usr/bin/mkdir -p /var/log/xray
ExecStart=/usr/local/bin/xray run -c /etc/xray/config.json
# ExecReload sends SIGHUP to reload the config; send SIGUSR1 instead to only reopen logs (see the table below)
ExecReload=/usr/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Reload systemd to recognize the new service, then start and enable it to run on boot:
# Re-scan the services under `/etc/systemd/system/`
sudo systemctl daemon-reload
# Start and enable the service
sudo systemctl enable xray && sudo systemctl start xray
# Or combine the last two commands as:
sudo systemctl enable --now xray
# Check the service status
sudo systemctl status xray
# Reload systemd after unit-file changes, then restart; config.json changes only need the restart
sudo systemctl daemon-reload && sudo systemctl restart xray
Service Management Comparison
| Action | Command | Signal | Impact on Connections | Use Case |
|---|---|---|---|---|
| Restart | `systemctl restart xray` | SIGTERM | High: Drops all active sessions and restarts the process. | Major configuration changes or binary updates. |
| Reload | `systemctl reload xray` | SIGHUP | Medium: Re-reads `config.json`. May cause brief interruptions. | Updating user UUIDs or modifying routing rules. |
| Log Reopen | `kill -s SIGUSR1 $PID` | SIGUSR1 | None: Only closes and reopens log file handles. | Logrotate: rotate logs without downtime. |
4. (Optional) Rotate the log file periodically using logrotate:
Same as the setup for Shadowsocks; see the Logrotate section.
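For reference, a logrotate policy for the paths configured above might look like the following sketch; the file path, schedule, and rotation count are assumptions, and the SIGUSR1 signal matches the log-reopen behavior described in the service management table above.

```text
# /etc/logrotate.d/xray (assumed path)
/var/log/xray/*.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        # Ask xray to reopen its log files without dropping connections
        systemctl kill -s SIGUSR1 xray 2>/dev/null || true
    endscript
}
```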
5. Verification
The DNS “0ms” Phenomenon (Fake IP)
If using TUN Mode on the client, running dig might show a 0 msec response time and an IP like 198.18.0.x.
This is the Fake IP mechanism: the client intercepts the DNS query and answers instantly from memory to prevent DNS leaks.
UDP Relay
UDP relay allows the server to handle non-TCP traffic (like gaming or VoIP) by encapsulating it within the established TCP tunnel between the Xray client and server. The server still reaches the final destination over standard UDP, while the client-to-server leg stays hidden inside a TCP stream to avoid being throttled or blocked by ISPs.
- Testing UDP Forwarding
To verify that your VLESS/REALITY server is correctly handling UDP traffic, use the dig command (part of the dnsutils package) to force a DNS query over UDP through your proxy.
- Server side:
# Replace 'ens3' with your interface and '1.2.3.4' with your public IP
# Confirm the interface name via `ip link show`
sudo tcpdump -i ens3 -n udp port 53 and host 1.2.3.4
- Client side:
dig @8.8.8.8 www.google.com
Success Criteria:
If you see DNS packets arriving at and leaving your VPS on port 53, the UDP-over-TCP relay is working correctly.
TCP Fast Open (TFO)
Since REALITY mimics a TLS 1.3 handshake, cutting a round trip out of connection setup substantially benefits performance.
- Standard TLS: Requires multiple back-and-forth packets before data is sent.
- TFO + REALITY: Allows the first VLESS authentication data to be sent alongside the initial SYN packet, significantly reducing the “Time to First Byte” (TTFB).
While `"tcpFastOpen": true` is set in the Xray config, it won't take effect unless the Linux kernel on your server is configured to allow it.
- Check current TFO status
cat /proc/sys/net/ipv4/tcp_fastopen
# 0: Disabled
# 1: Enabled (Client only)
# 2: Enabled (Server only)
# 3: Enabled (Both Client and Server)
- Enable TFO Permanently
To enable TFO for both server and client, and to have it persist after a reboot, add the parameter to the sysctl configuration:
sudo vim /etc/sysctl.conf
Add this line at the end:
net.ipv4.tcp_fastopen = 3
Apply the changes immediately:
sudo sysctl -p
- Verify if TFO is active
grep TCPFastOpen /proc/net/netstat
or
nstat -az TcpExtTCPFastOpenPassive
or listen on the server:
# Replace '1.2.3.4' with your public IP
sudo tcpdump -i ens3 -n "src host 1.2.3.4 and tcp[tcpflags] & tcp-syn != 0"
If TFO is working correctly, you should eventually see length > 0 on a packet with the [S] (SYN) flag.
Flags [S]: the connection is still in the “Start” (SYN) phase.
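As a scripted alternative to eyeballing grep output, the counter can be pulled straight out of /proc/net/netstat; this is a sketch that assumes the usual Linux layout where each TcpExt header row is immediately followed by its value row.

```shell
#!/bin/sh
# Print the TCPFastOpenPassive counter (TFO connections the server accepted).
# /proc/net/netstat stores each protocol as a header line followed by a
# value line; match header fields to values by position.
awk '
  /^TcpExt:/ && !header_seen { split($0, h); header_seen = 1; next }
  /^TcpExt:/ {
    split($0, v)
    for (i in h) if (h[i] == "TCPFastOpenPassive") print "TFO passive:", v[i]
  }
' /proc/net/netstat
```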
6. Optimizations
Modify the TCP settings to better handle high-latency connections.
sudo vim /etc/sysctl.conf
Add the following content:
# Enable Fair Queuing (FQ) for better latency and compatibility with BBR
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
net.ipv4.tcp_fastopen = 3
# Larger buffers to accommodate high bandwidth-delay product (BDP) of long-distance connections
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 131072 16777216
# Enable TCP Window Scaling to allow windows larger than 64KB, essential for high-latency connections
net.ipv4.tcp_window_scaling = 1
# Enable Selective ACK and Forward ACK for better loss recovery
net.ipv4.tcp_sack = 1
# Note: tcp_fack was removed in Linux 4.15; this line is a harmless no-op on newer kernels
net.ipv4.tcp_fack = 1
# Reduce SYN retries to fail faster in case of connectivity issues
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_synack_retries = 3
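After running `sudo sysctl -p`, a quick read-only check confirms the key settings took effect. This is a sketch; on Debian, BBR ships as the tcp_bbr kernel module, so if the congestion control still reads `cubic`, running `sudo modprobe tcp_bbr` first may be needed.

```shell
#!/bin/sh
# Read back the three settings that matter most; no root required.
for key in net.core.default_qdisc \
           net.ipv4.tcp_congestion_control \
           net.ipv4.tcp_fastopen; do
  printf '%s = %s\n' "$key" "$(sysctl -n "$key" 2>/dev/null || echo unavailable)"
done
```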
TCP BBR Congestion Control
BBR is a modern congestion control algorithm that can significantly improve throughput and reduce latency, especially on high-bandwidth or long-distance connections.
| Parameter | Value | Default | Logic |
|---|---|---|---|
| `net.core.default_qdisc` | `fq` | `pfifo_fast` | Fair Queuing paces packets to prevent bursts that trigger ISP-level dropping; it was required for BBR before Linux 4.13 and is still recommended. |
| `net.ipv4.tcp_congestion_control` | `bbr` | `cubic` | Switches from loss-based to model-based control. BBR maintains high throughput even when random packet loss occurs. |
| `net.ipv4.tcp_fastopen` | `3` | `1` | Enables TCP Fast Open (TFO) for both listener and connector, saving one RTT during the handshake. |
Memory Buffer Tuning
High latency requires larger buffers to keep unacknowledged data in flight.
$BDP = R \times RTT$
- BDP (Bandwidth-Delay Product) represents the amount of data that can be “in the pipe” at any given time without being acknowledged. For long-distance connections, this can easily exceed the default buffer sizes, leading to underutilization of the available bandwidth.
- R (Bandwidth) is the maximum data transfer rate of the connection (e.g., 100 Mbps = 12.5 MB/s).
- RTT (Round Trip Time) is the time it takes for a packet to go from the sender to the receiver and back (e.g., 100 ms).
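Plugging the example figures above into the formula: 100 Mbps is 12.5 MB/s and 100 ms is 0.1 s, so the pipe holds about 1.25 MB of unacknowledged data. The 16 MB buffer ceiling chosen above therefore leaves headroom for roughly 12x that bandwidth (about 1.28 Gbit/s) at the same RTT.

```shell
# BDP = R x RTT, in bytes:
#   R   = 100 Mbit/s = 12,500,000 bytes/s
#   RTT = 100 ms     = 0.1 s
echo $(( 12500000 / 10 ))   # → 1250000 bytes (~1.25 MB)
```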
| Parameter | Value | Typical Default | Logic |
|---|---|---|---|
| `net.core.rmem_max` | `16777216` (16MB) | `212992` (208KB) | Raises the hard limit for the receive window to handle high-speed bursts. |
| `net.core.wmem_max` | `16777216` (16MB) | `212992` (208KB) | Raises the hard limit for the send window, critical for the high-delay path. |
| `net.ipv4.tcp_rmem` | `4096 87380 16777216` (4KB 85.33KB 16MB) | `4096 131072 6291456` (4KB 128KB 6MB) | Keeps the default (mid) value conservative to save RAM, letting the kernel's autotuning scale up dynamically. |
| `net.ipv4.tcp_wmem` | `4096 131072 16777216` (4KB 128KB 16MB) | `4096 16384 4194304` (4KB 16KB 4MB) | Raises the default (mid) value so the "pipe" fills immediately upon connection. |
Why 87380 (85.33KB) as the receive buffer default?
The number 87380 is the historical Linux kernel default for tcp_rmem (receive buffer), which is derived from the Ethernet MTU (Maximum Transmission Unit, L2/L3) and the TCP Window Scaling logic:
- Ethernet Standard: A standard Ethernet frame is 1500 bytes.
- TCP Payload: After subtracting IP and TCP headers, the actual MSS (Maximum Segment Size, L4) is typically 1460 bytes, and $1460 \times 60 = 87600$, just above 87380.
- Legacy Alignment: In older Linux versions, this was tuned to fit exactly 60 packets into the initial receive window. Over time, 87380 became the hardcoded “optimal” starting point for a balance between memory usage and throughput for standard 10/100 Mbps links.
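The arithmetic behind that figure, spelled out:

```shell
# 60 full-size TCP segments at the typical Ethernet MSS of 1460 bytes:
echo $(( 1460 * 60 ))   # → 87600, close to the historical 87380 default
```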
Packet Loss Resilience
| Parameter | Value | Default | Logic |
|---|---|---|---|
| `net.ipv4.tcp_window_scaling` | `1` | `1` | Allows TCP windows to exceed 64KB. Essential for modern high-latency broadband. |
| `net.ipv4.tcp_sack` | `1` | `1` | Selective ACK: lets the receiver tell the sender exactly which segments are missing, avoiding full-window retransmission. |
| `net.ipv4.tcp_fack` | `1` | `1` | Forward ACK: works with SACK to better estimate packets in flight during loss. Removed in Linux 4.15, so the sysctl is ignored on newer kernels. |
Handshake & Timeout Optimizations
| Parameter | Value | Default | Logic |
|---|---|---|---|
| `net.ipv4.tcp_syn_retries` | `3` | `6` | Reduces the "hang" time for failed outbound connections. Fails faster (~15s vs ~127s) to allow proxy failover. |
| `net.ipv4.tcp_synack_retries` | `3` | `5` | Limits the server's attempts to respond to clients in high-loss environments, preventing SYN_RECV backlog bloat. |
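The ~15 s figure falls out of TCP's exponential backoff: the initial retransmission timeout is 1 s and doubles on each retry. A quick sketch of the arithmetic:

```shell
#!/bin/sh
# Total time before connect() gives up: initial SYN plus N retransmissions,
# each waiting twice as long as the last (RTO starts at 1 s).
total=0; rto=1; retries=3
i=0
while [ $i -le $retries ]; do
  total=$((total + rto))
  rto=$((rto * 2))
  i=$((i + 1))
done
echo "tcp_syn_retries=$retries -> ~${total}s"   # → ~15s (kernel default of 6 -> ~127s)
```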
References
- Gemini 3
- README - Xray-core: https://github.com/XTLS/Xray-core
- README - Reality: https://github.com/XTLS/REALITY