
Re: [syndication] RFC: myPublicFeeds.opml



On Tue, Oct 14, 2003 at 08:11:15AM -0400, Dave Winer wrote:
> You say it "isn't a good idea" and don't explain why.
> 
> TBL says it isn't a good idea because the owner of the site owns the
> namespace, which makes sense, but robots.txt et al already offer
> features based on common file names.
> 
> TBL is bucking a long-term trend here. Operating systems have always had
> known locations for special configuration files.

I think it's important to use a well-known location as a fall-back
mechanism.  If a site provides a reference to the XML file in a <link>
tag, then aggregators/spiders/bots should use it.  Only if there's no
hint of a feed directory should they consider issuing a separate
request.

The need for a well-known location is clear in my mind.  There are a
lot of organizations and content management systems that do not make
it easy to add arbitrary content to the <head> section of their documents.
And many will not want to regenerate megabytes of HTML that's already
sitting around.  They may not even have an easy way to do so.  We
should not raise the barrier to entry too high.

To summarize, the reasonable compromise seems to be:

  1. Examine the page for a <link> tag that points to the mythical
     "feeds.xml" file and use it if it's there

  2. In the absence of any <link> tag, look in the well-known
     fall-back location.

If this sounds familiar, it's how many aggregators perform
auto-discovery today.
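
For concreteness, here's a rough sketch of those two steps in Python.
The rel="alternate" attribute and the text/x-opml type are borrowed
from the way RSS auto-discovery works today, and "/feeds.xml" at the
site root stands in for whatever well-known location gets agreed on;
none of those details are settled by this proposal.

  from html.parser import HTMLParser
  from urllib.error import HTTPError
  from urllib.parse import urljoin
  from urllib.request import urlopen

  class LinkFinder(HTMLParser):
      """Collect href values from <link rel="alternate"> tags."""

      def __init__(self):
          super().__init__()
          self.hrefs = []

      def handle_starttag(self, tag, attrs):
          if tag != "link":
              return
          a = dict(attrs)
          if (a.get("rel") or "").lower() == "alternate" and a.get("href"):
              self.hrefs.append(a["href"])

  def find_feeds_file(page_url):
      # Step 1: examine the page for a <link> hint, e.g.
      #   <link rel="alternate" type="text/x-opml" href="/feeds.xml">
      html = urlopen(page_url).read().decode("utf-8", "replace")
      finder = LinkFinder()
      finder.feed(html)
      if finder.hrefs:
          return urljoin(page_url, finder.hrefs[0])

      # Step 2: no hint on the page, so fall back to the well-known
      # location at the site root.
      fallback = urljoin(page_url, "/feeds.xml")
      try:
          urlopen(fallback)
          return fallback
      except HTTPError:
          return None

An aggregator would run something like find_feeds_file() once per site
and cache the answer, so the fall-back request only happens when a
site offers no <link> hint at all.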

The impression I have is that the anti-namespace-pollution folks (such
as Tim) were willing to agree that a fall-back location was a
reasonable compromise.  But that may just be wishful remembering on
my part.

Jeremy
-- 
Jeremy D. Zawodny     |  Perl, Web, MySQL, Linux Magazine, Yahoo!
<Jeremy@Zawodny.com>  |  http://jeremy.zawodny.com/