⎯ TL;DR
  • "Template" outreach — conversion 2-5% (response rate). That's the industry baseline for cold outreach.
  • Three components deliver a combined 5-10× lift on that number: Spintax (2× deliverability), LLM personalization (2-3× opens), human-like cadence (1.5-2× reply rate).
  • Effects don't add — they multiply. 2 × 2.5 × 1.7 ≈ 8-9×.
  • Source: internal TG:ON analytics on 2.4M messages sent in Q4 2025. Variation by niche: e-com higher, B2B lower.

When you see an ad creative claiming "47% conversion on Telegram outreach" — it's most likely either out of context (response rate on a specific segment) or marketing stretch. The real picture is more nuanced: 47% is the upper bound, achievable only with the right stack in the right niche.

This breakdown isn't about "magic" 47%. It's about which specific mechanisms lift conversion from base 2% to "decent" 20-30%, and how 47% is reached under optimal conditions.

01 · Baseline

What "2% conversion" actually means — which metric

First source of confusion: everyone says "conversion" but means different things. Cold outreach typically tracks 4 metrics:

| Metric | What it counts | Typical value |
|---|---|---|
| Delivery rate | Message reached the recipient (not in spam/block) | 70-95% |
| Open rate | Recipient opened the chat (read) | 40-80% |
| Response rate | Recipient replied | 2-15% |
| Conversion rate | Recipient became a lead/customer | 0.5-5% |

In "2% → 47%" we're talking about response rate. It's the most honest metric for evaluating outreach quality — it doesn't count whether the person became a customer (offer plays a role there), but counts whether your approach got a reply.

Industry benchmark for cold Telegram DM in 2025 — 3-5% response rate. Source: aggregated reports from cold-outreach services (Lemlist, Instantly, Reply.io for email; Telegram specifics are usually worse than email due to higher noise).

02 · Spintax

Component 1: Spintax randomization (+2× deliverability)

Spintax (spinning syntax) — a technique where variants inside a template are chosen randomly for each send. Example:

```
# Template with spintax variants
"{Hi|Hello|Good day}, {name}! {Saw|Noticed} you in {the channel|the chat} "{source}", {wanted|decided} to {ask|check} about {your experience|your projects}."

# 10,000 sends = ~9,800 unique variants
# Telegram content hash: each message is unique → not grouped into spam clusters
```

Why it works: Telegram groups identical messages by text hash. If 100 accounts send identical text — SpamBot sees a cluster and applies group-wide sanction (usually FLOOD_WAIT + potential PEER_FLOOD after 2-3 hours). Spintax breaks this grouping at the sender level.
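To make the hash-grouping point concrete, here's a minimal illustrative sketch (the tiny expander and the 100-message batch are assumptions for the demo, not TG:ON internals): identical texts collapse into a single hash cluster, while spintax output spreads across many hashes.

```python
import hashlib
import random

def expand(template: str) -> str:
    """Resolve each {a|b|c} block to one randomly chosen option."""
    out, i = [], 0
    while i < len(template):
        if template[i] == "{":
            j = template.index("}", i)
            out.append(random.choice(template[i + 1 : j].split("|")))
            i = j + 1
        else:
            out.append(template[i])
            i += 1
    return "".join(out)

template = "{Hi|Hello|Good day}, Alex! {Saw|Noticed} you in the chat."

# 100 identical sends collapse into a single text hash -> one spam cluster
plain = {hashlib.sha256(b"Hi, Alex! Saw you in the chat.").hexdigest() for _ in range(100)}
# 100 spintax sends spread across 3 x 2 = 6 distinct hashes -> no dominant cluster
spun = {hashlib.sha256(expand(template).encode()).hexdigest() for _ in range(100)}

print(len(plain), len(spun))  # 1 unique hash vs (almost surely) 6
```

The exact hashing Telegram applies internally is not public; the point is only that any exact-duplicate detector sees one bucket on the left and many on the right.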

Empirical numbers from our 2.4M sends:

  • 43% delivery without spintax (1 account, 500 msg)
  • 87% delivery with spintax (same parameters)
  • 2.0× deliverability lift from spintax alone

Practical rule: your template needs at least 4-5 variable blocks, each with 3+ options. That gives ~240+ unique combinations, which is enough for any realistic batch (typically 50-500 msg/batch).
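The "~240+ combinations" arithmetic is easy to check. Below is a generic counter sketch (the sample template is illustrative); single-option slots like {name} are substitutions, not variety, so they're excluded:

```python
import math
import re

def combination_count(template: str) -> int:
    """Unique messages a spintax template can produce (substitution slots excluded)."""
    blocks = re.findall(r"\{([^{}]+)\}", template)
    return math.prod(len(b.split("|")) for b in blocks if "|" in b)

# 5 variable blocks x 3 options each, per the rule of thumb above
t = ("{Hi|Hey|Hello} {there|friend|colleague}, {saw|noticed|spotted} your "
     "{post|comment|question} and {wanted|decided|figured} I'd reach out.")
print(combination_count(t))  # 3**5 = 243, comfortably above 240
```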

03 · LLM Personalization

Component 2: LLM personalization (+2-3× opens and replies)

Spintax randomizes phrasing, but the content stays the same. The next level — LLM rewrites the message for recipient context.

What "recipient context" means: at minimum the recipient's name, the channel where they were found (name and category), and a snippet of the last post they engaged with. These are the fields the prompt template consumes.

Basic LLM prompt template (simplified):

```python
prompt = f"""
You're a friendly marketer writing a short Telegram message.
To: {recipient_name}
Found in: channel '{channel_name}' ({channel_category})
Last discussed post: "{last_post_snippet}"
Goal: {my_offer_summary}

Write a 2-3 sentence message. Mention the channel, hook into the post.
DON'T start with "Hello" or "Hi, {name}!". DON'T use emoji.
Tone: {tone_formal_or_casual}.
"""
```

Important nuance: the LLM doesn't generate every message from scratch — too expensive ($0.002 × 10K = $20 per batch) and too unpredictable. In practice a hybrid works: 20-30 LLM variants are pre-generated, then combined with spintax on the fly.
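The hybrid can be sketched as follows. The "pre-generated LLM variants" here are hardcoded stand-ins (a real run would batch-generate 20-30 per segment from the prompt above), and the {channel} placeholder is an illustrative convention, not a TG:ON API:

```python
import random

# Stand-ins for pre-generated LLM variants (20-30 in practice, 3 here).
# Each already carries context hooks and keeps spintax blocks for phrasing variety.
LLM_VARIANTS = [
    "{Saw|Noticed} your comment in {channel} - {curious|wondering} how you handled it.",
    "Your take in {channel} {stood out|caught my eye}, {had to|wanted to} ask about it.",
    "{Been|I've been} following {channel} and your posts {always|often} {stand out|pop up}.",
]

def expand(template: str) -> str:
    """Resolve each {a|b} block to one randomly chosen option."""
    out, i = [], 0
    while i < len(template):
        if template[i] == "{":
            j = template.index("}", i)
            out.append(random.choice(template[i + 1 : j].split("|")))
            i = j + 1
        else:
            out.append(template[i])
            i += 1
    return "".join(out)

def render(channel_name: str) -> str:
    skeleton = random.choice(LLM_VARIANTS)                  # LLM layer: context-aware skeleton
    skeleton = skeleton.replace("{channel}", channel_name)  # substitution layer
    return expand(skeleton)                                 # spintax layer: per-send phrasing

print(render("Growth Hacks Daily"))
```

Only the skeleton choice costs LLM tokens, and that cost is paid once per batch; every individual send is a free random draw.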

Effect on metrics:

| Approach | Open rate | Response rate |
|---|---|---|
| Template, no personalization | 42% | 2.8% |
| + spintax | 48% | 5.1% |
| + name + channel (substitution) | 61% | 9.4% |
| + LLM rewrite for context | 74% | 18.2% |
| + human-like cadence (see below) | 81% | 32.7% |
| + niche optimization (upper bound) | 89% | 47.1% |

Yes, 47% is not a starting number — it's the ceiling with full optimization in a favorable niche. Most clients see 20-35%, which is still ×6-10 over baseline.

«Spintax handles "don't land in spam". LLM handles "be interesting to read". Cadence handles "don't feel robotic".» ⎯ three layers of personalization

04 · Cadence

Component 3: Human-like cadence (+1.5-2× reply rate)

Third layer — not the text itself, but how it arrives. Mass senders usually ignore this aspect because it isn't visible in the UI.

What counts as cadence:

  1. Typing indicator: show "typing..." for 1-4 seconds before sending. Small thing, but recipient sees it.
  2. Delays between one account's messages: not burst, but distribution. Norm — log-normal with mean ~90 seconds and variance 60-300 seconds.
  3. Different times of day: not all 500 messages at 14:00 Monday. Spread across 3-4 hourly windows with breaks.
  4. Reaction to "read": if recipient read and didn't reply — don't hammer with "?" after 10 minutes. If unsubscribed — don't add to follow-up.
```python
# Delay function mimicking a human
import random
import time

import numpy as np

def human_delay(base_seconds=90):
    # Log-normal distribution: median ≈ base_seconds,
    # but rare long pauses up to 5-10 minutes (human got distracted)
    return np.random.lognormal(mean=np.log(base_seconds), sigma=0.7)

# Every 7-10th send — an extended pause ("lunch", "coffee"), 5-30 min
# (msg_number: index of the current message in the batch)
if msg_number % random.randint(7, 10) == 0:
    time.sleep(random.uniform(300, 1800))
```

Why cadence affects response rate (not just delivery): the recipient subconsciously feels the rhythm. "Typing..." before a message, a 2-minute pause after reading before your reply — all of this creates the sense of a live conversation. In controlled tests on our data, cadence adds +40-60% to response rate vs "instant" sending.
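The four rules above combine into a batch schedule. Here is a simulation sketch under the stated norms (log-normal gaps, a long break every 7-10th send); it is not TG:ON's actual scheduler, and the typing indicator is left out since it doesn't affect timing between sends:

```python
import numpy as np

rng = np.random.default_rng(7)

t = 0.0
timestamps = []
until_pause = rng.integers(7, 11)  # every 7-10th send gets a long break
for _ in range(500):
    t += rng.lognormal(np.log(90), 0.7)   # human-like gap, median ~90 s
    until_pause -= 1
    if until_pause == 0:
        t += rng.uniform(300, 1800)       # 5-30 min "coffee"/"lunch" pause
        until_pause = rng.integers(7, 11)
    timestamps.append(t)

gaps = np.diff(timestamps)
print(f"500 messages span {timestamps[-1] / 3600:.1f} h")
print(f"median gap {np.median(gaps):.0f} s, longest {gaps.max() / 60:.0f} min")
```

Running this shows why a 500-message batch cannot fit into one afternoon: the pauses alone stretch it across many hourly windows, which is exactly the anti-burst behavior described above.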

05 · Multiplicative effect

Why the effects are multiplicative, not additive

Key observation from the data: components don't add up. They're multiplying coefficients in the funnel.

  • 2.0× · Spintax · delivery
  • 2.5× · LLM · quality
  • 1.7× · Cadence · reply
  • 1.5× · Niche fit · ceiling

2.0 × 2.5 × 1.7 × 1.5 = ~12.75×. From baseline 2% that gives 25.5%. From 3% — 38%. From 3.7% — 47%.

Why multiplicative: each component operates at its own funnel layer: Spintax on delivery, LLM on opens, cadence on replies. A message has to pass every layer in sequence, so the per-layer coefficients multiply.

If any component is zero — the whole funnel drops. 100% deliverability × 0% opens = 0% response. That's why "added spintax, conversion stayed the same" is a common complaint: one component without the others does little.

06 · Implementation

How it's assembled in TG:ON

The whole funnel is configured in the UI without code:

  1. Template with spintax — editor with live preview, shows a specific combination example.
  2. LLM rewriting — connect your API key (OpenAI, Anthropic, DeepSeek, Groq, Gemini) or use the built-in TG:ON pool. Prompt is configured for niche and tone.
  3. Cadence profiles — "conservative" (medium delay 120s), "aggressive" (60s), "overnight" (450s). Choose or customize manually.
  4. Content context — automatically picked up from the channel where the lead was found (if scraped through TG:ON).

All telemetry per message — delivery, opens, replies, block/spam-report — goes to the dashboard, you can A/B test templates.

⎯ download

TG:ON for macOS · Windows · Linux

Desktop app, 160 MB. Runs locally, your keys stay yours. 3-day trial, no credit card.

Download for free
⎯ try it

Spintax + LLM + Cadence.
Out of the box.

3 days Pro tier free. 25 accounts, all modules. Run your offer on 500-1000 messages and see real numbers.

Start trial
07 · Honest caveats

Where this isn't enough

Honestly: 47% doesn't work everywhere and always. There are conditions under which conversion will land lower.

The TG:ON stack solves the problem of technical delivery and personalization. It doesn't solve: bad offer, bad targeting, bad continuation after first reply.