- Telegram's SpamBot is not a single bot — it's several layers of defense. Part is publicly documented, part is inferred through observation.
- The 6 main signals: session.entropy, swipe.velocity, pause.entropy, typing.cadence, fingerprint.mask, + a 6th (proprietary).
- Observation methodology: ~1,800 ban events in our client base, feature correlation, A/B tests on test accounts.
- No single signal alone triggers a ban — you need 2-4 matches. That makes the detector harder to reverse-engineer, but it also raises the importance of covering every signal.
- If you model fewer than 3 signals, you're at risk; 4-5 is relatively safe; all 6 is practically invisible.
Telegram doesn't publish the mechanics of its anti-spam. That's reasonable: publication = roadmap for bypass. But for us, as a vendor of an anti-ban solution, understanding the mechanics is critical. You can't effectively bypass a filter you don't understand.
This article is a distillation of 3 years of observation. Not from Telegram engineers (we don't have any), but from statistical analysis of real ban events in our client base plus controlled A/B tests on test accounts.
How we infer the signals
The base methodology:
- Collect ban events — every time a client's account gets banned, we log the full context: how many messages sent, over what period, with which device fingerprints, through which proxies, into which groups.
- Correlation analysis — we look for features that correlate with ban events. For example, "accounts with fingerprints from the top-3 most common get banned 2.3× more often than those with rare fingerprints".
- A/B testing on test accounts — we take 50-100 test accounts, apply different strategies, wait 30-60 days, compare ban rate. Reproducible.
- Elimination — remove one factor at a time, observe which one drops ban rate. Isolated causality.
The result — the 6 signals below. Confidence in each varies: the first 4 are high (reproduced multiple times), the 5th is medium, the 6th is hypothetical.
session.entropy
The distribution of session durations for an account. A real user: log-normal. A bot: usually uniform or fixed.
How Telegram detects it: they have session history on the server side. Computing the entropy distribution isn't computationally expensive — it can be done for millions of accounts in batch.
swipe.velocity
Chat-list scroll speed and trajectory. For real users, Bézier-like curves with micro-jitter; for bots, a straight line.
How it's detected: Telegram clients (TDesktop, iOS/Android apps) emit telemetry for scroll events. It's not a public API — it's telemetry built into the apps themselves.
We can't fake scrolling through a self-written client (TDLib doesn't scroll). But we can use official clients with automation (via memory address space + synthetic input events). This is complex and fragmented, used only in premium anti-detect setups.
pause.entropy(μ, σ)
Intervals between actions. Pauses depend on context (read a long message → long pause before replying; saw a short one → fast reaction).
The key point: for bots, pauses don't correlate with context. The scheduler just waits a fixed interval. For real users — it varies.
Check: if your bot replies with the same pause to "ok" and to "tell me more about Feature X, Y, Z" — that's a signal.
typing.cadence
Telegram has a "typing indicator" — it shows "typing..." while a user is typing. The API lets you send sendChatAction(typing) — and doing it right matters.
A real user:
- Starts the typing indicator
- Types for 1-10 seconds (depending on message length)
- Pauses for backspace (rare, ~10% of messages)
- Sends
A bot typically: either doesn't use the typing indicator at all, or sends it for a fixed duration before sending. Both variants are bot signatures.
fingerprint.mask
Device fingerprint is collected at authentication and attached to the session. For MTProto: device_model, app_version, system_version, lang_code.
Analysis: Telegram can cluster sessions by fingerprint. If 50 accounts share an identical fingerprint (rare in a real population of 950M users), that's a signal.
Proprietary signal (hidden)
Observation: some bans occur even when the first 5 signals look correct. That means there's a 6th signal we haven't identified.
Hypotheses (unconfirmed):
- Network-level ML on packet metadata
- Social-graph analysis (who you write to vs who writes back)
- Cross-account correlation via shared IP / cookies
- Content understanding beyond hash (LLM-style semantic similarity)
We don't know exactly. TG:ON has mitigation strategies that empirically reduce bans, but the exact mechanics are speculative.
How to apply this
A practical signal-by-signal checklist:
- Session entropy: log-normal distribution of session durations (μ=10min, σ=0.9)
- Swipe velocity: if you automate via TDLib, no scroll telemetry is emitted at all (see above); if you use browser automation, model Bézier curves
- Pause entropy: context-aware delays that depend on content length
- Typing cadence: send typing_indicator + type at 4-8 CPS (chars per second)
- Fingerprint mask: sample from a real-world distribution, stable per-account
- 6th signal: basic hygiene (residential IP, normal warmup) is usually sufficient
TG:ON for macOS · Windows · Linux
Desktop app, 160 MB. Runs locally, your keys stay yours. 3-day trial, no credit card.
Download for free
5 signals modeled. The 6th — soft mitigation.
In TG:ON the full anti-bot signal stack works by default. 3-day trial — observe the ban rate on your own accounts.
Start trial