
feat: use cpm(characters per minute) for more accurate result #54

Open · wants to merge 8 commits into master
Conversation


jcha0713 (author) commented Dec 10, 2022

Motivation

The library treats each CJK character as a separate word. However, unlike Chinese and Japanese, Korean characters should not be treated as words: a single character is often meaningless on its own and only combines with others to form a word. For this reason, the results are not very accurate when computing reading time for Korean text.

I'm not an expert in Chinese or Japanese, but the same may be true for those languages, so there is a possibility that this library is giving inaccurate results for them as well.

As a solution, I suggest counting all CJK characters as individual characters (rather than as words) and using cpm (characters per minute) for more accurate results. This way, we can count CJK characters and Latin words separately, compute a reading time for each, and simply add the two values up.
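To make the idea concrete, here is a minimal sketch of the proposed computation (the function name and the 200 wpm figure are illustrative assumptions; only the 500 cpm default comes from this PR):

```ts
// Illustrative sketch, not the PR's actual code: Latin words are read
// at wpm, CJK characters at cpm, and the two partial times are summed.
const wordsPerMinute = 200      // assumed wpm default for Latin words
const charactersPerMinute = 500 // cpm default proposed in this PR

function readingMinutes(words: number, chars: number): number {
  return words / wordsPerMinute + chars / charactersPerMinute
}

// e.g. 100 Latin words + 250 CJK characters:
readingMinutes(100, 250) // => 0.5 + 0.5 = 1 minute
```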

Major changes

In this PR, I made several changes and added more test cases to ensure everything is working fine.

  1. First, I changed the WordCountStats type to have two fields, words and chars, instead of total. Then I replaced words in the ReadingTimeResult type with a counts object that groups words and chars together. I also changed the Options type to take an optional charactersPerMinute value; the default for cpm is 500 (ref: medium). A sketch of the reshaped types follows this list.

  2. As mentioned above, it now calculates two different reading time values for CJK characters and non-CJK words and adds the numbers together to get minutes.

  3. I fixed a bug that occurred when the first character of the text is a punctuation mark.

  4. I introduced some new variables to improve the readability of the code.
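To make the new shape concrete, here is a rough sketch of the reshaped types. The field names follow the description above, but the exact shapes (and the unit of `time`) are my assumptions, not verbatim code from this PR:

```ts
interface WordCountStats {
  words: number // non-CJK words, read at wpm
  chars: number // CJK characters, read at cpm
}

interface ReadingTimeResult {
  minutes: number
  time: number // assumed to be milliseconds
  counts: WordCountStats // replaces the previous `words: number` field
}

interface Options {
  wordsPerMinute?: number
  charactersPerMinute?: number // new option, defaults to 500
}
```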

Another proposal

Currently, countWords handles a link as a single word. For example, https://google.com or [google](https://google.com) would each be counted as one word. However, I believe we should count these as multiple words, as it's more natural to read a link word by word. So I changed the logic to count all the words within a link and altered the test cases accordingly. Please let me know if you have any concerns with this.
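As a rough sketch of the idea (a hypothetical helper, not the actual implementation), splitting a link on everything that is not a letter or digit yields its word count:

```ts
// Hypothetical helper: count the words that make up a link.
const countLinkWords = (link: string): number =>
  (link.match(/[A-Za-z0-9]+/g) ?? []).length

countLinkWords('https://google.com') // => 3 ("https", "google", "com")
```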

I believe this PR would help CJK users get a much more accurate reading time estimate. In fact, when I tested it on a blog post of mine written in Korean, it reported 13 minutes, which is pretty accurate (previously it was 28 minutes 😓).

- `countWords` ignores non-word characters at the beginning and end of a
  paragraph
- `isWordOrChar` function is declared for better readability

The main logic for counting words remains the same:
if a non-word bound is followed by a non-CJK character, it is counted as a word.
However, if a character is a CJK character, it is counted as a char instead of a word.

The main problem with the previous logic was that, unlike Chinese and
Japanese, a single Korean character is not a word, and since it was counted
as one, the computed reading time differed significantly from the actual
time.
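In code, the rule looks roughly like this (an illustrative sketch; the CJK ranges and the bound test are simplified stand-ins for the library's actual character classes):

```ts
// Illustrative only: Hiragana/Katakana, CJK Unified Ideographs, and
// Hangul syllables stand in for the library's real CJK detection.
const isCJK = (ch: string): boolean =>
  /[\u3040-\u30ff\u4e00-\u9fff\uac00-\ud7af]/.test(ch)

function count(text: string): { words: number; chars: number } {
  let words = 0
  let chars = 0
  let inWord = false
  for (const ch of text) {
    if (isCJK(ch)) {
      chars += 1      // every CJK character counts as a char
      inWord = false
    } else if (/[\s.,!?;:]/.test(ch)) {
      inWord = false  // a non-word bound ends the current word
    } else if (!inWord) {
      words += 1      // a non-CJK char after a bound starts a new word
      inWord = true
    }
  }
  return { words, chars }
}

count('한국어 is fun') // => { words: 2, chars: 3 }
```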

This commit suggests a change to the Latin word counting logic as well.
Previously, a link was counted as a single word. I suggest counting all
the words that make up a link. For example, `https://google.com` is
now a three-word paragraph. Hopefully, this change will improve the
accuracy of the results.
The reading time of CJK text is now calculated based on cpm (characters
per minute) and added to the non-CJK reading time for better multi-language
support. This changes the structure of the output object.
jcha0713 (author) commented Jan 1, 2023

Hello, @ngryman. It's a new year, so I just wanted to follow up on this pull request. Is there any chance it could be reviewed in the near future? I'm excited to contribute to the project and would really appreciate any feedback. Thank you for your time and consideration; I look forward to your response.

@abiriadev

> As a solution, I suggest counting all CJK characters as individual characters (rather than as words) and using cpm (characters per minute) for more accurate results.

@jcha0713 isn't it better to use WPM for Korean and use CPM for only Japanese and Chinese?

Anyway, it's a shame this PR hasn't been merged yet.

macx commented Apr 17, 2024

@ngryman Project dead? Please review (and merge) this MR.

@Josh-Cena (Collaborator)

Hi, I'm co-maintaining. I'm not sure if @ngryman has time to review at all.

I think it's very hard to gauge what a "word" means and whether reading speed can really be accurately measured by either "words" or "characters". Even in Chinese, I would say two-character words can be read faster than two single-character words. If anything, we should use Intl.Segmenter instead, which separates words by semantics, not by their string forms, and would easily solve the Korean problem. It has been supported since Node 16 and in most browsers (except Firefox, unfortunately). This PR already contains a breaking change. Why don't we stop iterating upon this fundamentally broken counting algorithm and use something more robust?

abiriadev commented Apr 18, 2024

@Josh-Cena

> Why don't we stop iterating upon this fundamentally broken counting algorithm and use something more robust?

Sounds possible, so I prototyped a naive implementation:

```js
// Korean wpm example

// example text taken from Korean wikipedia: https://ko.wikipedia.org/wiki/%ED%95%9C%EA%B5%AD%EC%96%B4
const text = `한국어(韓國語, 문화어: 조선말)는 대한민국과 조선민주주의인민공화국의 공용어이다. 둘은 표기나 문법에서는 차이가 없지만 표현에서 차이가 있다.`

const w = [
	...new Intl.Segmenter(undefined, {
		granularity: 'word',
	}).segment(text),
].filter(({ isWordLike }) => isWordLike).length

// Korean wpm source: https://www.jkos.org/upload/pdf/JKOS057-04-17.pdf
console.log(w / 202.3)

// result: 0.08403361344537814m ≈ 5.0420168067s
```

But the locale matters: if we leave it undefined, it is determined automatically from the system preference, which would make unit tests inconsistent across environments.
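One way around that (my assumption, not something this prototype does) is to pin the locale explicitly so segmentation doesn't depend on the system preference:

```ts
// Pinning the locale keeps word segmentation deterministic across machines.
const segmenter = new Intl.Segmenter('ko', { granularity: 'word' })

const countWordLike = (text: string): number =>
  [...segmenter.segment(text)].filter(s => s.isWordLike).length
```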

@jcha0713 (Author)

I agree that we should use a more robust method if possible. I'm not sure that measuring reading speed based on semantic segments is the best way, but I guess it's the most practical solution we have as of now.

In fact, I first created a library that uses Intl.Segmenter before I submitted this PR, but abandoned it due to a performance issue. That was two years ago, and it's possible my code was broken. I might have to do some more research to improve it, but in the meantime, please take a look at the repo if you're interested: jcha0713/better-reading-time (sorry for the rude naming).

I'm happy to do some more work on this if there's interest.
