Today I discovered a service called mail2RSS, which converts mails you send to a predefined address into an RSS feed. Quite handy; I'll have to try whether it also handles multiple newsletter subscriptions and splices them into one aggregated feed.
I guess that means I can ditch my attempts to program this particular functionality in PHP, especially since I failed miserably at decoding MIME attachments such as pictures. I wanted this because I needed a way to display the really funny nichtlustig.de comics (sorry, German only!) in my feed reader instead of getting them in my email inbox.
What I came up with was a PHP script that reads from a specified mailbox and constructs a new RSS item linking to the latest comic, but I wasn't able to get the image to display properly without hard-coding one particular message format.
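For the curious, the general idea can be sketched like this (in Python rather than PHP, and parsing a sample message instead of reading a real mailbox via POP3/IMAP; the message contents and the `mail_to_rss_item` helper are made up for illustration): walk the MIME parts of a mail, grab the first image attachment, and inline it into an RSS item.

```python
# Sketch: turn a MIME mail with an attached comic into an RSS item.
# Self-contained example -- a sample message stands in for a real mailbox.
import base64
from email import message_from_bytes
from email.message import EmailMessage
from xml.sax.saxutils import escape

# Build a sample multipart newsletter mail with an attached "comic".
msg = EmailMessage()
msg["Subject"] = "Nichtlustig comic of the day"
msg.set_content("Today's comic is attached.")
fake_png = b"\x89PNG...not a real image, just stand-in bytes..."
msg.add_attachment(fake_png, maintype="image", subtype="png",
                   filename="comic.png")

def mail_to_rss_item(raw_bytes):
    """Walk the MIME parts, take the first image, inline it as a data URI."""
    mail = message_from_bytes(raw_bytes)
    img_uri = None
    for part in mail.walk():
        if part.get_content_maintype() == "image":
            data = part.get_payload(decode=True)  # undoes base64 transfer encoding
            b64 = base64.b64encode(data).decode("ascii")
            img_uri = f"data:{part.get_content_type()};base64,{b64}"
            break
    title = escape(mail["Subject"] or "(no subject)")
    desc = (escape(f'<img src="{img_uri}" alt="comic"/>')
            if img_uri else "no image found")
    return f"<item><title>{title}</title><description>{desc}</description></item>"

item = mail_to_rss_item(msg.as_bytes())
print(item)
```

Embedding the image as a data URI sidesteps the "which message format?" problem, since the feed reader no longer has to fetch the picture from anywhere.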
Well, I’ll try mail2RSS and see if it can do everything I wanted in my app.
I'd like to present the latest project I worked on. It is called FeedMonsun and is an online RSS aggregator which Herbert and I built for the programming for distributed systems course at our uni.
A few thoughts on how the whole Web 2.0 hype thing might interfere with search engines.
After looking into all the new possibilities that come with AJAX, I started thinking about how search engines index pages and how the semantic web might be influenced by these new technologies. If people use AJAX more and more (which I hope they do) to create less web-like user interfaces that update information dynamically, search engines won't be able to get a view of all the information available on a specific website.
The possible solution I came up with is something like a mashup between robots.txt and web services. If a web application offered a web service for search robots that spits out the information available on the page (kept behind the scenes in the database) as XML, the search engine could easily index it and map its context, available in the XML structure, to the content. Another advantage would be that sites could determine exactly which information should be findable by search engines and which should remain on their site only.
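A minimal sketch of what such a crawler-facing feed might look like (all names here, including `content_feed` and the record fields, are invented for illustration): the site renders its database records as XML with context attributes, and entries marked private simply never appear in the feed.

```python
# Sketch of the "robots.txt meets web services" idea: expose database
# content as structured XML so a crawler can index it with context.
from xml.etree.ElementTree import Element, SubElement, tostring

def content_feed(records, include_private=False):
    """Render indexable records as XML; private entries are withheld,
    so the site controls exactly what search engines get to see."""
    root = Element("contentfeed")
    for rec in records:
        if rec.get("private") and not include_private:
            continue  # this entry stays on-site only
        entry = SubElement(root, "entry", url=rec["url"], topic=rec["topic"])
        entry.text = rec["text"]
    return tostring(root, encoding="unicode")

# Hypothetical site content, one record flagged as not-for-crawlers.
records = [
    {"url": "/comics/42", "topic": "humor", "text": "Latest comic strip"},
    {"url": "/drafts/1", "topic": "drafts", "text": "Unfinished post",
     "private": True},
]
feed = content_feed(records)
print(feed)
```

The `topic` attribute is where the context for the semantic mapping would live; a real version would of course use a richer vocabulary than a single string.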
One offspring of this concept would be that services like UDDI could be built up to be searched by the search robots, making it very easy to promote websites in a very descriptive manner. (Remind me to start such a directory website when the concept takes off, so I can charge customers for being listed and make loads of money.)