
Re: [syndication] site-wide metadata [was: RFC: myPublicFeeds.opml]




On Wednesday, 15 October 2003, at 14:43 America/Montreal, Mark Nottingham wrote:
Put another way, the problem is with standards (whether a de facto
standard by a single vendor, or a Recommendation from the W3C, or an
IETF standard, etc.) specifying URIs for other people. It's true that
they can choose to follow that standard or not, but software will be
written assuming that that URI means something whether or not they do.
This has consequences for both the Web sites that don't support it, and
the software that's expecting a specific behaviour without any
agreement that it'll happen.

Another problem, which in my humble opinion is even worse, is when your site already has content with exactly the same name. We are no longer in the situation where the Web is just starting. It's here now, and people have developed content.

Imagine that a spec says you MUST have http://www.example.org/w3c/, a directory supposed to contain a bunch of metadata files, with the index as the main file pointing to the resources.

Doh!!! That Web site already exists, and has for years. It's http://www.la-grange.net/w3c/ and it hosts some of the French translations of W3C specifications.

So a specification comes along and breaks a practice I have had for years, and ultimately a lot of links on the net that point to this resource.

favicon.ico has been a nightmare for my logs for a long time. I hesitated between creating one and trying to break the user agents that were requesting it.
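For what it's worth, a middle path I've seen (a sketch, assuming an Apache server with mod_alias, and not necessarily what I do myself) is to answer those requests explicitly instead of letting them pile up as 404s in the error log:

```
# In httpd.conf or an .htaccess at the site root:
# tell user agents the favicon is intentionally gone,
# so the request stops showing up as a 404 error.
Redirect gone /favicon.ico
```

The agents still ask, but at least the log noise becomes a deliberate 410 rather than an endless stream of missing-file errors.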

robots.txt came very early, so its story is a bit different, but in fact it's not useful for hiding content from indexing, because it reveals exactly where the content you don't want explored is. An .htaccess always seems a better way.
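To make the difference concrete (a sketch, assuming an Apache server and a hypothetical /private/ directory):

```
# --- robots.txt, at the site root ---
# Polite robots skip /private/, but the file itself
# advertises the path to anyone who reads it:
User-agent: *
Disallow: /private/

# --- .htaccess, placed inside /private/ (Apache 1.3/2.0 syntax) ---
# The server refuses every request, listed in robots.txt or not:
Order allow,deny
Deny from all
```

robots.txt is a request for cooperation; the .htaccess rule is actual access control.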


I think the RSS autodiscovery link in HTML was the right thing to do, and it has worked very well.
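The nice thing about autodiscovery is that the site owner chooses the feed URI and merely declares it in the page, instead of a spec reserving a path. It looks like this (the href is an example URL, not a reserved location):

```html
<!-- in the <head> of the home page -->
<link rel="alternate" type="application/rss+xml"
      title="Site feed" href="http://www.example.org/index.rss">
```

The software follows the declaration rather than assuming a magic URI, so nothing on the site has to move.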


--
Karl Dubost