Re: [syndication] Scraper code?
> The registry does not need to be centralized. foo can be wherever, it
> can vary from blog to blog. A centralized registry is a natural thing
> here, but is by no means a monopoly .... anybody could setup the script
> to run on their own server and the list of aggregators and their
> corresponding URI formats could be circulated via RSS.
So, site A has a feed that I'm interested in, and they use service B to
dispatch subscriptions. B knows about aggregators 1, 2, and 3, but how does
it know about my own, hand-written, arse-kicking aggregator, Q?
Of course, you could have a register-your-aggregator service as well, so
that B becomes a sort of marketplace of feeds and aggregators, matching
them up as appropriate. However, as a user, I still have to go to B,
register myself there, add my aggregator if they don't have it, and hope
that they have all of my feeds; otherwise, I have to enter them manually.
Nightmare. What's the point? Additionally, it still doesn't address the
case where my aggregator is behind a firewall and can't expose anything
for B to talk to.
Using the Content-Type to dispatch on a particular format (RSS in this
case) is much simpler and more robust, and - more to the point - it's how
the Web is designed to work.
--
Mark Nottingham