Post Published on August 10, 2012.
Last Updated on April 28, 2016 by davemackey.
I read like most people sleep. I read books – of which I have far too many – and I read web articles – of which there are way too many. I follow 50 RSS feeds, which I’d like to keep up with on a daily basis – but this is simply impossible. Sure, it might be possible if they each published only one article per day, but they don’t – they vomit out tremendous quantities of articles. If you doubt me, subscribe to a few of the more prolific feeds I follow – Ars Technica, BetaNews, GigaOm, Lifehacker, Mashable, PandoDaily, ReadWriteWeb, TechCrunch, and VentureBeat. Depending on the day, my feed reader spins through 250-500 stories. That is crazy!
There is one complete solution to this dilemma, which I am sure someone will urge upon me: stop reading RSS feeds, stop trying to keep up with what is happening in the world…Yes, that is a real option…but when you want to know about the next hot product or what software you should use to accomplish x process, you need someone who follows the pulse of innovation…and I love content curation. Yes, there are others who could perform the job just as well as I do – but if everyone followed this advice, we’d eventually end up with no one doing it…and, again, I like content curation.
I do have a partial solution, something I attempted to implement with my now-defunct Informed Networker (let’s not go there). Essentially, we need a way to deduplicate the content. For example, the sites I mentioned above oftentimes cover the same story – it is not unusual for me to see similar stories on the same topic five times in a single day. We need a way to “know” that these are duplicate articles and then subsume them under the most definitive article. This allows an individual to explore the story further if they wish, but also keeps folks like myself from paging through 200 essentially duplicate articles each day, when all I really need to know is that x company has released y product – not everybody and their mother’s opinion on the release.
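To make the idea concrete, here is a minimal sketch in Python of what such de-duplication might look like: cluster articles whose titles share enough words (a rough Jaccard similarity), then keep one representative per cluster. The article titles and the 0.5 threshold are made up for illustration – a real system would compare full text, not just titles.

```python
def tokens(title):
    """Lowercase word set, for a rough similarity comparison."""
    return set(title.lower().split())

def jaccard(a, b):
    """Overlap of two word sets: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b)

def deduplicate(titles, threshold=0.5):
    """Greedily group titles; the first title seen becomes the
    representative (the 'definitive article') for its cluster."""
    clusters = []  # list of (representative_tokens, [titles])
    for title in titles:
        t = tokens(title)
        for rep, members in clusters:
            if jaccard(t, rep) >= threshold:
                members.append(title)  # subsume under the representative
                break
        else:
            clusters.append((t, [title]))
    return [members for _, members in clusters]

# Hypothetical feed items: two cover the same story, one does not.
feed = [
    "Acme releases Widget 2.0",
    "Acme Widget 2.0 released today",
    "Why tablets are the future",
]
groups = deduplicate(feed)
# The two Acme stories collapse into one group; the tablet piece stands alone.
```

A reader would then page through one entry per group instead of one per article, expanding a group only when they want the full spread of coverage.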
Now, there is a secondary aspect to this which I haven’t figured out how to implement…and which, since we still have trouble monetizing our content (to make money for the publisher/author), seems far off in the distant future, but I’ll throw it out there. We need a way to incentivize reading content. Some sort of kickback from the content producer to the content reader – or at least to the content influencer.
Now, the issue is this: say one makes a $2 CPM from page views – how does one return some of this to the end reader or influencer? This is a real dilemma, because you need that $2 CPM to keep bread on the table and the doors open for the business. Divide $2 among 1,000 readers and you aren’t giving your readers anything tangible, nor are you leaving anything for yourself. Perhaps if you gave 10% to influencers it would be enough…for example, say I read 100 articles in a day and was “compensated” $0.01 per article – I’d make $1. Okay, never mind…that isn’t anything.
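The arithmetic above can be spelled out. The figures are the post’s hypotheticals, not real ad rates – and note that a strict 10% cut of a $2 CPM actually works out to far less than even the $0.01-per-article figure dismissed above:

```python
# $2 CPM means $2 of revenue per 1,000 page views.
cpm = 2.00
per_view = cpm / 1000            # revenue one reader's view generates: $0.002

share = 0.10                     # hypothetically kick back 10% to readers
per_view_payout = per_view * share   # $0.0002 per article read

articles_per_day = 100
daily_payout = per_view_payout * articles_per_day  # $0.02 per day
```

So even a heavy reader clearing 100 articles a day would earn two cents under a 10% share – which only reinforces the point that per-view micropayments at these rates aren’t a workable incentive.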
But do you see what I’m saying? Essentially we need a better way to float the best content to the top, to eliminate duplicate content (although leaving readily available thorough and diverse analysis of a topic), and finally we need a way to “focus” the attention of curators on the best information available and provide an incentive for them to do so…besides love of curation, which doesn’t put bread on anyone’s table.
2 thoughts on “The Information Overload Dilemma.”
For your first issue, you may want to check out Fever (http://feedafever.com/), an RSS feed aggregator that took me from a similar situation (hundreds of great RSS feeds) down to something that I can manage to spend 10 mins a day in and get entirely caught up. It surfaces the most important content by how many different feeds are talking about the same stories. I love it.
No association, I just like it and use it, thought it would be helpful to you.
Joshua – Thanks for the link. I’ll check out Fever.