
Re: [syndication] Scraper code?



----- Original Message -----
From: "Mark Nottingham" <mnot@mnot.net>

>
> The "Internet" way to do this is to use a media type in the HTTP
> Content-Type header (and maybe a 'type' hint on 'a' and 'link' links) to
> dispatch to the appropriate software; so, when a browser sees
>    Content-Type: application/rss+xml
> it knows to send it to the aggregator that the user has configured. The
> problem here is that some aggregators aren't local software; they live on
> the Web at other URIs. Unfortunately, I don't know of any browsers that
> allow Content-Type dispatch to another URI; I have logged a bug [1] with
> Mozilla.
>
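To make the dispatch idea concrete, here's a rough sketch (in Python, purely illustrative - the handler names and return strings are invented) of what media-type dispatch looks like: map a media type to a handler, the way a browser would hand application/rss+xml to an aggregator.

```python
# Hypothetical sketch of Content-Type dispatch; handlers here are
# stand-ins for "send to aggregator", "render in browser", etc.
handlers = {
    "application/rss+xml": lambda url: f"send {url} to aggregator",
    "text/html": lambda url: f"render {url} in browser",
}

def dispatch(content_type, url):
    # Strip parameters such as "; charset=utf-8" before the lookup.
    media_type = content_type.split(";")[0].strip()
    handler = handlers.get(media_type)
    if handler is None:
        return f"no handler for {media_type}"
    return handler(url)
```

The interesting extension Mark mentions is letting a handler be a remote URI rather than local software - the table above would then map a media type to another URL to forward the resource to.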
I've thought of making a generic client for publishing to blogs, syndicating
links, etc. - generally anything that 'writes' back to the web. It would hook
into some clients (essentially IE) to add context-sensitive menus (like the
'blog this!' menu and others) when a user clicks something on the desktop -
both links in browsers and files on disk.

Services could hook into this by providing a service description, identified
by a Content-Type (say, "application/x-www-clientservice"), whose body would
describe what to hook into, where to send the information, and what format
that information should take.
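Nothing like this is specified yet, but to give a feel for it, here is one possible shape for such a description and the registration step, sketched in Python. Every field name here (name, hooks, post_url, format) is invented for the example, not a spec.

```python
# Illustrative shape for a hypothetical "application/x-www-clientservice"
# body, as the generic client might parse it.
service = {
    "name": "Blogger",
    "hooks": ["blog-this"],                         # context-menu entries to add
    "post_url": "http://example.org/api/post",      # where to send the info
    "format": "application/x-www-form-urlencoded",  # what format that info takes
}

registry = {}

def register(desc):
    # The generic client would store the description keyed by name,
    # then rebuild its context menus from the registered hooks.
    registry[desc["name"]] = desc
    return sorted(registry)
```

Clicking a registration link on a service's site would fetch a body like the dict above and pass it to register(), so each new service shows up in the same menu.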

This would let me 'blog this' and point to Blogger, Movable Type, etc., all
within one menu. Each service could offer a link that, when clicked, would be
dispatched to this generic client and register the service with the client.

What do people think?