An experience syndicating Trac Timeline RSS feeds with PlanetPlanet

Alvaro del Castillo acs at gsyc.escet.urjc.es
Sun Oct 15 18:39:18 EST 2006


Hi guys!

First of all, thanks for your work on PlanetPlanet. I have used it
several times without problems ... but I have found some interesting
quirks while trying to use it with the Trac Timeline RSS feed:

http://qualoss.libresoft.es/cgi-bin/trac.cgi/timeline

I have integrated several projects' timeline feeds into one page using
PlanetPlanet:

http://projects.libresoft.es/

The problem is that, in order to create a unique identifier and store
the feed entry in its cache, PlanetPlanet uses:

---------------------------
for entry in entries:
    # Try really hard to find some kind of unique identifier
    if entry.has_key("id"):
        entry_id = cache.utf8(entry.id)
    elif entry.has_key("link"):
        entry_id = cache.utf8(entry.link)
    elif entry.has_key("title"):
        entry_id = (self.url + "/"
                    + md5.new(cache.utf8(entry.title)).hexdigest())
    elif entry.has_key("summary"):
        entry_id = (self.url + "/"
                    + md5.new(cache.utf8(entry.summary)).hexdigest())
    else:
        log.error("Unable to find or generate id, entry ignored")
        continue
---------------------------

In the RSS feed from the Trac timeline, several entries share the same
link (for example, links pointing to a modified Wiki page), so using the
link as the identifier makes the PlanetPlanet cache overwrite different
RSS entries and breaks the Trac integration we are looking for.
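
To make the collision concrete, here is a minimal sketch (the entries and
dates below are made up, just for illustration) of what happens when two
timeline events point at the same Wiki page:

---------------------------
entries = [
    {"link": "http://qualoss.libresoft.es/cgi-bin/trac.cgi/wiki/SomePage",
     "date": "2006-10-14T09:12:00Z"},
    {"link": "http://qualoss.libresoft.es/cgi-bin/trac.cgi/wiki/SomePage",
     "date": "2006-10-15T17:40:00Z"},
]

# Both entries fall into the "link" branch above, so they get the same
# identifier and the second one overwrites the first in the Planet cache.
ids = [entry["link"] for entry in entries]
print(len(set(ids)))   # 1
---------------------------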

To solve the problem we have appended the date field to the entry link
field, so all entries look different to PlanetPlanet and the Trac RSS
feed is shown completely.

------------------------------
+++ __init__.py 2006-10-15 10:22:20.266702153 +0200
@@ -739,7 +740,7 @@
             if entry.has_key("id"):
                 entry_id = cache.utf8(entry.id)
             elif entry.has_key("link"):
-                entry_id = cache.utf8(entry.link)
+                entry_id = cache.utf8(entry.link + entry.date)
             elif entry.has_key("title"):
                 entry_id = (self.url + "/"
                             + md5.new(cache.utf8(entry.title)).hexdigest())
------------------------------
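
With the date appended, the two hypothetical entries from the sketch
above now map to different identifiers:

------------------------------
# Same made-up entries as before, now keyed on link + date as in the patch.
link = "http://qualoss.libresoft.es/cgi-bin/trac.cgi/wiki/SomePage"
dates = ["2006-10-14T09:12:00Z", "2006-10-15T17:40:00Z"]
ids = [link + d for d in dates]
print(len(set(ids)))   # 2 -> both timeline events survive in the cache
------------------------------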

I understand that PlanetPlanet has to cope with RSS feeds that contain
duplicate entries, and this filtering is good for other feeds, but maybe
you could make this behaviour configurable in some way in the PlanetPlanet
config, so that well-formed RSS feeds can pass through the PlanetPlanet
cache without problems.
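
Just to illustrate what I mean (the option name and the helper below are
only mine, nothing like this exists in PlanetPlanet today), it could be an
opt-in, per-feed flag:

------------------------------
# Purely hypothetical sketch: an opt-in flag so only feeds like the Trac
# timeline get the date appended, while well-behaved feeds keep the plain
# link as their identifier.
def make_entry_id(entry, id_use_date=False):
    if "id" in entry:
        return entry["id"]
    if "link" in entry:
        if id_use_date and "date" in entry:
            return entry["link"] + entry["date"]
        return entry["link"]
    return None

print(make_entry_id({"link": "http://example.org/wiki/SomePage",
                     "date": "2006-10-15T17:40:00Z"}, id_use_date=True))
------------------------------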

I am not subscribed to the list so for comments about this thread please
CC me.

Cheers and thanks for your work guys!

-- 
Alvaro del Castillo
acs at gsyc.es


