Feeds are essentially just lists of posts. A post represents a single web page. The entire point of the gator program is to fetch the actual posts from the feed URLs and store them in our database. That way we can display them nicely in our CLI.
Enhance the agg command to actually fetch the RSS feeds, parse them, and print the posts to the console, all in a long-running loop.
The agg command should accept a single argument, time_between_reqs. It's a duration string, like 1s, 1m, 1h, etc. I used the time.ParseDuration function to parse it into a time.Duration value.
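Here's a minimal, self-contained sketch of that parsing step. The hard-coded "1m" stands in for the command-line argument, which your handler would read from wherever it stores its args:

package main

import (
	"fmt"
	"time"
)

func main() {
	// In the real agg handler this string would come from the
	// command's arguments; "1m" is hard-coded for illustration.
	timeBetweenRequests, err := time.ParseDuration("1m")
	if err != nil {
		fmt.Println("invalid duration:", err)
		return
	}
	// A time.Duration prints as "1m0s" for one minute, which is
	// exactly the startup message shown below.
	fmt.Printf("Collecting feeds every %s\n", timeBetweenRequests)
}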
My agg command prints Collecting feeds every 1m0s when it starts. Use a time.Ticker to run your scrapeFeeds function once every time_between_reqs. I used a for loop to ensure that it runs immediately (I don't like waiting) and then again every time the ticker ticks:

// The receive from ticker.C sits in the loop's post statement, so the
// body runs once right away, then blocks for a tick between iterations.
ticker := time.NewTicker(timeBetweenRequests)
for ; ; <-ticker.C {
	scrapeFeeds(s)
}
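If the receive-in-the-post-statement trick reads as too clever, the same loop can be written with the wait at the end of the body instead. This is just a rewrite of the block above, relying on the same ticker and scrapeFeeds:

for {
	scrapeFeeds(s)
	// Block here until the next tick before looping again.
	<-ticker.C
}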
Do NOT DoS the servers you're fetching feeds from. Any time you write code that makes requests to a third-party server, you should be sure that you are not making too many requests too quickly. That's why I recommend printing to the console for each request, and being ready with a quick Ctrl+C to stop the program if you see something going wrong.
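As a reference point for what each iteration does, here's a minimal sketch of fetching one feed, parsing it, and printing its posts. The scrapeFeed name, the rssFeed/rssItem structs, and the hard-coded URL are all assumptions for illustration, not your program's exact types; the database side (picking which feed to fetch next) is omitted:

package main

import (
	"context"
	"encoding/xml"
	"fmt"
	"io"
	"net/http"
	"time"
)

// rssFeed mirrors the subset of a standard RSS 2.0 document this
// sketch cares about; field names here are assumptions.
type rssFeed struct {
	Channel struct {
		Title string    `xml:"title"`
		Items []rssItem `xml:"item"`
	} `xml:"channel"`
}

type rssItem struct {
	Title string `xml:"title"`
	Link  string `xml:"link"`
}

// scrapeFeed fetches a single feed URL, parses the XML, and prints
// each post's title to the console.
func scrapeFeed(ctx context.Context, url string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	// Identify the client; polite to the servers we fetch from.
	req.Header.Set("User-Agent", "gator")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}

	var feed rssFeed
	if err := xml.Unmarshal(data, &feed); err != nil {
		return err
	}

	fmt.Printf("Feed: %s\n", feed.Channel.Title)
	for _, item := range feed.Channel.Items {
		fmt.Printf(" - %s\n", item.Title)
	}
	return nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := scrapeFeed(ctx, "https://blog.boot.dev/index.xml"); err != nil {
		fmt.Println("error:", err)
	}
}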
The agg command should now be a never-ending loop that fetches feeds and prints posts to the console. The intended use case is to leave the agg command running in the background while you interact with the program in another terminal. You should be able to kill the program with Ctrl+C.
There are no CLI tests for this lesson; test your own program and make sure everything behaves as expected. Here are a few RSS feeds to get you started:
https://techcrunch.com/feed/
https://news.ycombinator.com/rss
https://blog.boot.dev/index.xml