# textlens vs text-readability vs natural: Which npm Package for Text Analysis?
Source: Dev.to
## Feature Comparison
| Feature | textlens | text‑readability | natural |
|---|---|---|---|
| Readability formulas | 8 (all major) | 7 | 0 |
| Consensus grade | Yes | No | No |
| Sentiment analysis | AFINN‑165 | No | Bayes + AFINN |
| Keyword extraction | TF‑IDF | No | TF‑IDF |
| Keyword density (n‑grams) | Unigrams, bigrams, trigrams | No | N‑grams |
| SEO scoring | Yes | No | No |
| Text summarization | Extractive | No | No |
| Reading time | Yes | No | No |
| Tokenization | Basic (English) | Basic | Advanced (multilingual) |
| Stemming | No | No | Porter, Lancaster, etc. |
| Classification | No | No | Naive Bayes, logistic regression |
| Phonetics | No | No | Soundex, Metaphone, etc. |
| Dependencies | 0 | 0 | 1 (webworker‑threads, optional) |
| TypeScript | Native | No (DefinitelyTyped) | No (DefinitelyTyped) |
| CLI | Yes | No | No |
| Bundle size (min) | ~45 KB | ~8 KB | ~2 MB+ |
## When to Use Each Package

### Use textlens when
You need content analysis — readability scoring, sentiment detection, keyword extraction, SEO scoring — and you want it all from one package with one API.
```javascript
const { analyze } = require('textlens');
const result = analyze(articleText);
console.log(result.readability.consensusGrade); // 7.2
console.log(result.sentiment.label); // 'positive'
console.log(result.keywords[0].word); // 'javascript'
console.log(result.readingTime.minutes); // 4
```
Best for: blog‑post quality checks, content pipelines, documentation linting, SEO tools, content‑management systems.
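Reading time, for instance, is just a words-per-minute calculation. A minimal sketch of the idea, assuming a 200 wpm reading speed (a common convention, not necessarily the figure textlens uses):

```javascript
// Toy reading-time estimate: word count divided by an assumed reading speed.
// 200 words per minute is a common default; textlens may use a different value.
function readingTime(text, wordsPerMinute = 200) {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  const minutes = Math.max(1, Math.ceil(words / wordsPerMinute));
  return { words, minutes };
}

const sample = 'word '.repeat(450).trim();
console.log(readingTime(sample)); // { words: 450, minutes: 3 }
```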
### Use text‑readability when
You only need readability scores and nothing else. It’s smaller (~8 KB vs ~45 KB) and does one thing.
```javascript
const rs = require('text-readability');
const grade = rs.fleschKincaidGrade(text);
const ease = rs.fleschReadingEase(text);
```
Best for: simple readability gates where bundle size matters and you don’t need sentiment, keywords, or other analysis.
Trade‑off: No consensus grade (you pick one formula), no TypeScript types shipped with the package, no CLI.
### Use natural when
You need NLP capabilities beyond text analysis — tokenization with stemmers, Naive Bayes classification, phonetic matching, string‑distance algorithms, or multilingual support.
```javascript
const natural = require('natural');
const tokenizer = new natural.WordTokenizer();
const classifier = new natural.BayesClassifier();
classifier.addDocument('great product', 'positive');
classifier.addDocument('terrible service', 'negative');
classifier.train();
console.log(classifier.classify('good experience'));
```
Best for: chatbots, search engines, language‑processing pipelines, text classification, spell checking.
Trade‑off: Large bundle (~2 MB+), no readability formulas at all, no built‑in content scoring. It’s a general NLP library, not a content‑analysis tool.
## Readability Formula Coverage
| Formula | textlens | text‑readability | natural |
|---|---|---|---|
| Flesch Reading Ease | Yes | Yes | No |
| Flesch‑Kincaid Grade | Yes | Yes | No |
| Gunning Fog Index | Yes | Yes | No |
| Coleman‑Liau Index | Yes | Yes | No |
| SMOG Index | Yes | Yes | No |
| Automated Readability Index | Yes | Yes | No |
| Dale‑Chall | Yes | Yes | No |
| Linsear Write | Yes | No | No |
| Consensus Grade | Yes | No | No |
textlens adds the Linsear Write formula and a consensus grade that averages all grade‑level formulas into a single number.
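The consensus idea itself is straightforward: compute several grade-level formulas and average them. Here is a toy sketch with just two formulas (Flesch-Kincaid and ARI) and a crude vowel-group syllable heuristic — not textlens's implementation, just the shape of the approach:

```javascript
// Illustrative only: two grade-level formulas plus a simple average,
// mimicking a consensus grade. The syllable counter is a rough heuristic.
function syllables(word) {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return groups ? groups.length : 1;
}

function consensusGrade(text) {
  const sentences = text.split(/[.!?]+/).filter(s => s.trim()).length || 1;
  const words = text.match(/[A-Za-z]+/g) || [];
  const chars = words.join('').length;
  const sylls = words.reduce((n, w) => n + syllables(w), 0);

  // Flesch-Kincaid Grade Level
  const fk = 0.39 * (words.length / sentences) + 11.8 * (sylls / words.length) - 15.59;
  // Automated Readability Index
  const ari = 4.71 * (chars / words.length) + 0.5 * (words.length / sentences) - 21.43;

  return { fk, ari, consensus: (fk + ari) / 2 };
}
```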
## Sentiment Analysis Comparison
Both textlens and natural offer sentiment analysis, but with different approaches.
### textlens
Uses the AFINN‑165 lexicon (~3,300 words with sentiment scores from –5 to +5). It returns a normalized score, a label, a confidence value, and lists of matched positive/negative words:
```javascript
const { sentiment } = require('textlens');
const result = sentiment('The product is great but the support is terrible.');
console.log(result.label); // 'neutral' (mixed)
console.log(result.positive); // ['great']
console.log(result.negative); // ['terrible']
console.log(result.confidence); // 0.18
```
### natural
Offers both AFINN‑based and Naive Bayes classification. The Bayes approach is trainable — you can feed it your own labeled data:
```javascript
const natural = require('natural');
const Analyzer = natural.SentimentAnalyzer;
const stemmer = natural.PorterStemmer;
const analyzer = new Analyzer('English', stemmer, 'afinn');
const score = analyzer.getSentiment(['great', 'but', 'terrible']);
// Returns a single number
```
Bottom line: textlens gives more structured output (label, confidence, word lists) out of the box. natural gives more flexibility if you want trainable classifiers.
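Under the hood, lexicon-based scoring is just a dictionary lookup and a sum. A toy version with a made-up four-word lexicon (AFINN-165 has roughly 3,300 entries) shows the shape of textlens-style structured output, without being either library's actual code:

```javascript
// Toy lexicon scorer illustrating how AFINN-style sentiment works.
// This four-entry lexicon is made up for the example.
const LEXICON = { great: 3, good: 3, terrible: -3, bad: -3 };

function sentimentSketch(text) {
  const tokens = text.toLowerCase().match(/[a-z]+/g) || [];
  const positive = tokens.filter(t => (LEXICON[t] || 0) > 0);
  const negative = tokens.filter(t => (LEXICON[t] || 0) < 0);
  const score = tokens.reduce((s, t) => s + (LEXICON[t] || 0), 0);
  const label = score > 0 ? 'positive' : score < 0 ? 'negative' : 'neutral';
  return { score, label, positive, negative };
}

console.log(sentimentSketch('The product is great but the support is terrible.'));
// score 0, label 'neutral', positive ['great'], negative ['terrible']
```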
## Keyword Extraction
Both textlens and natural support TF‑IDF keyword extraction.
### textlens
```javascript
const { keywords } = require('textlens');
const kw = keywords(articleText, { topN: 5 });
// [{ word: 'javascript', score: 4.2, count: 8, density: 2.1 }, ...]
```
### natural
```javascript
const natural = require('natural');
const TfIdf = natural.TfIdf;
const tfidf = new TfIdf();
tfidf.addDocument(articleText);
tfidf.listTerms(0).slice(0, 5);
```
textlens adds keyword density percentages and n‑gram analysis (bigrams, trigrams) via the separate density() function. natural requires more setup but provides a solid TF‑IDF implementation and supports multi‑document TF‑IDF natively.
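TF-IDF itself is compact enough to sketch from scratch — term frequency multiplied by inverse document frequency — which clarifies what both libraries compute. This is a minimal illustration, not either package's implementation:

```javascript
// Minimal TF-IDF over a small corpus: tf(t, d) * log(N / df(t)).
function tfidf(docs) {
  const tokenize = d => d.toLowerCase().match(/[a-z]+/g) || [];
  const tokened = docs.map(tokenize);

  // Document frequency: how many documents contain each term.
  const df = new Map();
  for (const tokens of tokened) {
    for (const t of new Set(tokens)) df.set(t, (df.get(t) || 0) + 1);
  }

  // Per-document term scores, highest first.
  return tokened.map(tokens => {
    const counts = new Map();
    for (const t of tokens) counts.set(t, (counts.get(t) || 0) + 1);
    return [...counts]
      .map(([term, count]) => ({
        term,
        score: (count / tokens.length) * Math.log(docs.length / df.get(term)),
      }))
      .sort((a, b) => b.score - a.score);
  });
}

const scores = tfidf([
  'javascript tooling for javascript developers',
  'python tooling for data science',
]);
console.log(scores[0][0].term); // 'javascript'
```

Terms that appear in every document (like "tooling" above) get an idf of log(1) = 0, which is exactly why TF-IDF surfaces distinctive keywords rather than common filler.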
## Bundle Size and Dependencies
| Package | Install Size | Dependencies | Min Bundle |
|---|---|---|---|
| textlens | ~180 KB | 0 | ~45 KB |
| text‑readability | ~45 KB | 0 | ~8 KB |
| natural | ~10 MB+ | Several | ~2 MB+ |
If you’re building a serverless function or a browser‑side tool, size matters.
- text‑readability is the smallest.
- textlens is a reasonable middle ground.
- natural is heavy.
## TypeScript Support
textlens ships native TypeScript types — no @types package needed:
```typescript
import { readability, ReadabilityResult } from 'textlens';
const result: ReadabilityResult = readability(text);
```
text‑readability and natural have community‑maintained types via DefinitelyTyped (@types/text-readability, @types/natural), which can lag behind releases.
## The Overlap Problem
Before textlens, building a content‑analysis pipeline meant installing multiple packages:
```bash
# Old approach
npm install text-readability   # readability scores
npm install sentiment          # sentiment analysis
npm install keyword-extractor  # keyword extraction
npm install reading-time       # reading time
```
Four packages, four APIs, four sets of documentation.
textlens consolidates these into one package:
```bash
npm install textlens
```

```javascript
const { analyze } = require('textlens');
const result = analyze(text);
// result.readability, result.sentiment, result.keywords, result.readingTime
```
## Which Should You Choose?
- Choose textlens if you want readability + sentiment + keywords + SEO scoring from a single zero‑dependency package. It covers the “content analysis” use case end‑to‑end.
- Choose text‑readability if you only need readability scores and want the smallest possible bundle.
- Choose natural if you need broader NLP capabilities like classification, stemming, phonetics, or multilingual tokenization. It’s a different category of tool.
There’s no wrong answer — it depends on what you’re building. All three are MIT‑licensed and actively maintained.
Disclosure: I built textlens. This comparison reflects my honest assessment, but read the other packages’ docs and decide for yourself.
This is part of the textlens series — tutorials on text analysis in JavaScript and TypeScript.