textlens vs text-readability vs natural: Which npm Package for Text Analysis?

Published: March 8, 2026 at 06:00 PM EDT
6 min read
Source: Dev.to

Feature Comparison

| Feature | textlens | text-readability | natural |
|---|---|---|---|
| Readability formulas | 8 (all major) | 7 | 0 |
| Consensus grade | Yes | No | No |
| Sentiment analysis | AFINN-165 | No | Bayes + AFINN |
| Keyword extraction | TF-IDF | No | TF-IDF |
| Keyword density (n-grams) | Unigrams, bigrams, trigrams | No | N-grams |
| SEO scoring | Yes | No | No |
| Text summarization | Extractive | No | No |
| Reading time | Yes | No | No |
| Tokenization | Basic (English) | Basic | Advanced (multilingual) |
| Stemming | No | No | Porter, Lancaster, etc. |
| Classification | No | No | Naive Bayes, logistic regression |
| Phonetics | No | No | Soundex, Metaphone, etc. |
| Dependencies | 0 | 0 | 1 (webworker-threads, optional) |
| TypeScript | Native | No (DefinitelyTyped) | No (DefinitelyTyped) |
| CLI | Yes | No | No |
| Bundle size (min) | ~45 KB | ~8 KB | ~2 MB+ |

When to Use Each Package

Use textlens when

You need content analysis — readability scoring, sentiment detection, keyword extraction, SEO scoring — and you want it all from one package with one API.

const { analyze } = require('textlens');

const result = analyze(articleText);
console.log(result.readability.consensusGrade); // 7.2
console.log(result.sentiment.label);            // 'positive'
console.log(result.keywords[0].word);           // 'javascript'
console.log(result.readingTime.minutes);        // 4

Best for: blog‑post quality checks, content pipelines, documentation linting, SEO tools, content‑management systems.
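Reading-time estimates like the one above are usually just word count divided by an assumed reading speed. The 200 words-per-minute rate below is a common default, not necessarily the rate textlens uses:

```javascript
// Minimal reading-time estimate: word count over an assumed reading speed.
// 200 wpm is a typical adult reading rate; the real textlens rate may differ.
function readingTime(text, wordsPerMinute = 200) {
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  return {
    words,
    minutes: Math.max(1, Math.round(words / wordsPerMinute)),
  };
}

const sample = 'word '.repeat(400).trim();
console.log(readingTime(sample).minutes); // 2
```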

Use text‑readability when

You only need readability scores and nothing else. It’s smaller (~8 KB vs ~45 KB) and does one thing.

const rs = require('text-readability');

const grade = rs.fleschKincaidGrade(text);
const ease  = rs.fleschReadingEase(text);

Best for: simple readability gates where bundle size matters and you don’t need sentiment, keywords, or other analysis.

Trade‑off: No consensus grade (you pick one formula), no TypeScript types shipped with the package, no CLI.

Use natural when

You need NLP capabilities beyond text analysis — tokenization with stemmers, Naive Bayes classification, phonetic matching, string‑distance algorithms, or multilingual support.

const natural = require('natural');

const tokenizer = new natural.WordTokenizer();
const classifier = new natural.BayesClassifier();

classifier.addDocument('great product', 'positive');
classifier.addDocument('terrible service', 'negative');
classifier.train();

console.log(classifier.classify('good experience'));

Best for: chatbots, search engines, language‑processing pipelines, text classification, spell checking.

Trade‑off: Large bundle (~2 MB+), no readability formulas at all, no built‑in content scoring. It’s a general NLP library, not a content‑analysis tool.

Readability Formula Coverage

| Formula | textlens | text-readability | natural |
|---|---|---|---|
| Flesch Reading Ease | Yes | Yes | No |
| Flesch-Kincaid Grade | Yes | Yes | No |
| Gunning Fog Index | Yes | Yes | No |
| Coleman-Liau Index | Yes | Yes | No |
| SMOG Index | Yes | Yes | No |
| Automated Readability Index | Yes | Yes | No |
| Dale-Chall | Yes | Yes | No |
| Linsear Write | Yes | No | No |
| Consensus Grade | Yes | No | No |

textlens adds the Linsear Write formula and a consensus grade that averages all grade‑level formulas into a single number.
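A consensus grade is conceptually simple: run each grade-level formula and average the results. Here is a sketch using the published Flesch-Kincaid and Automated Readability Index formulas with only two of the formulas and a rough vowel-group syllable heuristic; textlens's actual implementation may differ:

```javascript
// Rough syllable heuristic: count vowel groups (approximate, English-only).
function syllables(word) {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return groups ? groups.length : 1;
}

function stats(text) {
  const words = text.split(/\s+/).filter(Boolean);
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim()).length || 1;
  const sylls = words.reduce((n, w) => n + syllables(w), 0);
  const chars = words.reduce((n, w) => n + w.length, 0);
  return { words: words.length, sentences, sylls, chars };
}

// Flesch-Kincaid Grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
function fleschKincaidGrade(text) {
  const { words, sentences, sylls } = stats(text);
  return 0.39 * (words / sentences) + 11.8 * (sylls / words) - 15.59;
}

// Automated Readability Index: 4.71*(chars/words) + 0.5*(words/sentences) - 21.43
function ariGrade(text) {
  const { words, sentences, chars } = stats(text);
  return 4.71 * (chars / words) + 0.5 * (words / sentences) - 21.43;
}

// Consensus: average the grade-level estimates into one number.
function consensusGrade(text) {
  const grades = [fleschKincaidGrade(text), ariGrade(text)];
  return grades.reduce((a, b) => a + b, 0) / grades.length;
}
```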

Sentiment Analysis Comparison

Both textlens and natural offer sentiment analysis, but with different approaches.

textlens

Uses the AFINN‑165 lexicon (~3,300 words with sentiment scores from –5 to +5). It returns a normalized score, a label, a confidence value, and lists of matched positive/negative words:

const { sentiment } = require('textlens');

const result = sentiment('The product is great but the support is terrible.');
console.log(result.label);      // 'neutral' (mixed)
console.log(result.positive);   // ['great']
console.log(result.negative);   // ['terrible']
console.log(result.confidence); // 0.18
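The lexicon approach behind this output is straightforward: sum per-word scores from a valence dictionary, then normalize. A sketch with a tiny stand-in lexicon (AFINN-165 itself has ~3,300 entries, and the normalization here is illustrative rather than textlens's exact formula):

```javascript
// Tiny stand-in for the AFINN-165 lexicon (real entries score -5..+5).
const LEXICON = { great: 3, good: 3, terrible: -3, awful: -3 };

function sentiment(text) {
  const tokens = text.toLowerCase().match(/[a-z']+/g) || [];
  const positive = [];
  const negative = [];
  let score = 0;
  for (const t of tokens) {
    const s = LEXICON[t];
    if (s === undefined) continue;
    score += s;
    (s > 0 ? positive : negative).push(t);
  }
  // Normalize by token count so longer texts don't dominate.
  const comparative = tokens.length ? score / tokens.length : 0;
  const label = score > 0 ? 'positive' : score < 0 ? 'negative' : 'neutral';
  return { score, comparative, label, positive, negative };
}

const r = sentiment('The product is great but the support is terrible.');
console.log(r.label);    // 'neutral'
console.log(r.positive); // ['great']
console.log(r.negative); // ['terrible']
```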

natural

Offers both AFINN‑based and Naive Bayes classification. The Bayes approach is trainable — you can feed it your own labeled data:

const natural = require('natural');
const Analyzer = natural.SentimentAnalyzer;
const stemmer = natural.PorterStemmer;
const analyzer = new Analyzer('English', stemmer, 'afinn');

const score = analyzer.getSentiment(['great', 'but', 'terrible']);
// Returns a single number

Bottom line: textlens gives more structured output (label, confidence, word lists) out of the box. natural gives more flexibility if you want trainable classifiers.
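The trainable approach natural offers is a standard multinomial Naive Bayes; the core idea fits in a few dozen lines. This is a sketch of the technique, not natural's implementation:

```javascript
// Minimal multinomial Naive Bayes text classifier with Laplace smoothing.
class TinyBayes {
  constructor() {
    this.docCounts = {};  // label -> number of training documents
    this.wordCounts = {}; // label -> { word -> count }
    this.totalWords = {}; // label -> total word count for that label
    this.vocab = new Set();
    this.totalDocs = 0;
  }

  tokenize(text) {
    return text.toLowerCase().match(/[a-z']+/g) || [];
  }

  addDocument(text, label) {
    this.docCounts[label] = (this.docCounts[label] || 0) + 1;
    this.totalDocs += 1;
    this.wordCounts[label] = this.wordCounts[label] || {};
    this.totalWords[label] = this.totalWords[label] || 0;
    for (const w of this.tokenize(text)) {
      this.vocab.add(w);
      this.wordCounts[label][w] = (this.wordCounts[label][w] || 0) + 1;
      this.totalWords[label] += 1;
    }
  }

  classify(text) {
    const tokens = this.tokenize(text);
    let best = null;
    let bestLog = -Infinity;
    for (const label of Object.keys(this.docCounts)) {
      // log P(label) + sum of log P(word | label), Laplace-smoothed.
      let lp = Math.log(this.docCounts[label] / this.totalDocs);
      for (const w of tokens) {
        const count = (this.wordCounts[label][w] || 0) + 1;
        lp += Math.log(count / (this.totalWords[label] + this.vocab.size));
      }
      if (lp > bestLog) { bestLog = lp; best = label; }
    }
    return best;
  }
}

const clf = new TinyBayes();
clf.addDocument('great product love it', 'positive');
clf.addDocument('terrible service awful experience', 'negative');
console.log(clf.classify('love the great service')); // 'positive'
```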

Keyword Extraction

Both textlens and natural support TF‑IDF keyword extraction.

textlens

const { keywords } = require('textlens');

const kw = keywords(articleText, { topN: 5 });
// [{ word: 'javascript', score: 4.2, count: 8, density: 2.1 }, ...]
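The density figure in that output is just occurrences over total words, expressed as a percentage; extended to n-grams, the same idea looks like this (a sketch of the concept, not textlens's density() internals):

```javascript
// Keyword density for n-grams: count each n-gram, divide by total n-grams,
// and report the result as a percentage.
function ngramDensity(text, n = 2) {
  const tokens = text.toLowerCase().match(/[a-z']+/g) || [];
  const counts = {};
  for (let i = 0; i + n <= tokens.length; i++) {
    const gram = tokens.slice(i, i + n).join(' ');
    counts[gram] = (counts[gram] || 0) + 1;
  }
  const total = Math.max(1, tokens.length - n + 1);
  return Object.entries(counts)
    .map(([gram, count]) => ({ gram, count, density: (count / total) * 100 }))
    .sort((a, b) => b.count - a.count);
}

const top = ngramDensity('node runs javascript and node runs fast', 2)[0];
console.log(top.gram); // 'node runs'
```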

natural

const natural = require('natural');
const TfIdf = natural.TfIdf;
const tfidf = new TfIdf();

tfidf.addDocument(articleText);
tfidf.listTerms(0).slice(0, 5);

textlens adds keyword density percentages and n-gram analysis (bigrams, trigrams) via a separate density() function. natural requires more setup but provides a solid TF-IDF implementation that supports multi-document corpora natively.
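TF-IDF itself is compact: a term's frequency within one document, weighted by its inverse frequency across the corpus. A sketch of the core computation (both libraries layer their own tokenization and weighting choices on top of this):

```javascript
// Minimal TF-IDF: score the terms of one document against a small corpus.
function tokenize(text) {
  return text.toLowerCase().match(/[a-z']+/g) || [];
}

function tfidfTerms(docIndex, corpus) {
  const docs = corpus.map(tokenize);
  const doc = docs[docIndex];
  const tf = {};
  for (const t of doc) tf[t] = (tf[t] || 0) + 1;

  const scores = Object.keys(tf).map((term) => {
    // idf = log(N / number of documents containing the term)
    const df = docs.filter((d) => d.includes(term)).length;
    const idf = Math.log(docs.length / df);
    return { term, score: (tf[term] / doc.length) * idf };
  });
  return scores.sort((a, b) => b.score - a.score);
}

const corpus = [
  'javascript javascript runs in the browser',
  'python is popular for data science',
  'the browser loads modules',
];
console.log(tfidfTerms(0, corpus)[0].term); // 'javascript'
```

Terms that appear in every document get an idf of zero, which is why common words fall to the bottom of the ranking without an explicit stopword list.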

Bundle Size and Dependencies

| Package | Install size | Dependencies | Min bundle |
|---|---|---|---|
| textlens | ~180 KB | 0 | ~45 KB |
| text-readability | ~45 KB | 0 | ~8 KB |
| natural | ~10 MB+ | Several | ~2 MB+ |

If you’re building a serverless function or a browser‑side tool, size matters.

  • text‑readability is the smallest.
  • textlens is a reasonable middle ground.
  • natural is heavy.

TypeScript Support

textlens ships native TypeScript types — no @types package needed:

import { readability, ReadabilityResult } from 'textlens';

const result: ReadabilityResult = readability(text);

text‑readability and natural have community‑maintained types via DefinitelyTyped (@types/text-readability, @types/natural), which can lag behind releases.

The Overlap Problem

Before textlens, building a content‑analysis pipeline meant installing multiple packages:

# Old approach
npm install text-readability   # readability scores
npm install sentiment          # sentiment analysis
npm install keyword-extractor  # keyword extraction
npm install reading-time       # reading time

Four packages, four APIs, four sets of documentation.

textlens consolidates these into one package:

npm install textlens
const { analyze } = require('textlens');

const result = analyze(text);
// result.readability, result.sentiment, result.keywords, result.readingTime

Which Should You Choose?

  • Choose textlens if you want readability + sentiment + keywords + SEO scoring from a single zero‑dependency package. It covers the “content analysis” use case end‑to‑end.
  • Choose text‑readability if you only need readability scores and want the smallest possible bundle.
  • Choose natural if you need broader NLP capabilities like classification, stemming, phonetics, or multilingual tokenization. It’s a different category of tool.

There’s no wrong answer — it depends on what you’re building. All three are MIT‑licensed and actively maintained.

Disclosure: I built textlens. This comparison reflects my honest assessment, but read the other packages’ docs and decide for yourself.

This is part of the textlens series — tutorials on text analysis in JavaScript and TypeScript.
