Case in point: it only took me about an hour to automatically add my most recent Twitter updates to this blog. (Yes, the Twitter badges would have worked, but for performance and consistency reasons I try not to depend on third-party JavaScript.)
If you're reading this in a blog reader, then you can view the new Twitter updates by visiting the blog itself.
A short Python script, shown below, is executed every five minutes as a cron job. The script fetches my Twitter feed as JSON, parses it, and writes it back out as an XHTML fragment on the filesystem. My WordPress theme then includes this file when processing its templates.
I could have written this as a WordPress extension, but that would have involved writing PHP, which I try to avoid. This also has the benefit of working equally well with any type of publishing system that can include arbitrary files.
The Python script, saved as fetch_latest_twitter.py:
#!/usr/bin/python
# Load the latest update for a Twitter user and leave it in an XHTML fragment
import getopt
import simplejson
import sys
import urllib2
# After parsing
# data.created_at
#   data.id
# data.relative_created_at
# data.text
# data.user.description
# data.user.url
# data.user.name
# data.user.location
# data.user.id
# data.user.screen_name
LAST_UPDATE_URL = 'http://twitter.com/t/status/user_timeline/%s?count=1'
TEMPLATE = """
<div class="twitter">
<span class="twitter-user"><a href="http://twitter.com/%s">Twitter</a>: </span>
<span class="twitter-text">%s</span>
<span class="twitter-relative-created-at"><a href="http://twitter.com/dewitt/statuses/%s">Posted %s</a></span>
</div>
"""
def Usage():
  print 'Usage: %s [options] twitterid' % __file__
  print
  print "  This script fetches a user's latest Twitter update and stores"
  print '  the result in a file as an XHTML fragment.'
  print
  print '  Options:'
  print '    -h --help   : print this help'
  print '    -o --output : the output file [default: stdout]'
def FetchTwitter(twitterid, output):
  assert twitterid
  assert int(twitterid)
  url = LAST_UPDATE_URL % twitterid
  f = urllib2.urlopen(url)
  jsonstring = f.read()
  json = simplejson.loads(jsonstring)
  data = json[0]
  xhtml = TEMPLATE % (data['user']['screen_name'], data['text'],
                      data['id'], data['relative_created_at'])
  if output:
    Save(xhtml, output)
  else:
    print xhtml

def Save(xhtml, output):
  out = file(output, 'w')
  print >> out, xhtml
  out.close()
def main():
  try:
    # 'ho:' registers both short options; the original 'h' alone would
    # have rejected -o even though Usage() advertises it
    opts, args = getopt.gnu_getopt(sys.argv[1:], 'ho:', ['help', 'output='])
  except getopt.GetoptError:
    Usage()
    sys.exit(2)
  try:
    twitterid = args[0]
  except IndexError:
    Usage()
    sys.exit(2)
  output = None
  for o, a in opts:
    if o in ('-h', '--help'):
      Usage()
      sys.exit(0)
    if o in ('-o', '--output'):
      output = a
  FetchTwitter(twitterid, output)

if __name__ == '__main__':
  main()
The script is run every five minutes with the following crontab:
*/5 * * * * [/path/to/]fetch_latest_twitter.py [twitterid] --output [/path/to/your/wordpress/theme/]twitter.html
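For illustration only, a filled-in version of that line might look like the following. The user ID and paths here are invented, so substitute your own:

```
*/5 * * * * /home/me/bin/fetch_latest_twitter.py 9876 --output /home/me/wordpress/wp-content/themes/mytheme/twitter.html
```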
Obviously, replace [twitterid] with your Twitter ID and adjust the paths accordingly. The WordPress templates were each modified with the following simple line:
<?php include("twitter.html"); ?>
And my unimaginative CSS:
/*
* Twitter
*/
.twitter-user
{
  font-size: .8em;
}
.twitter-text
{
  font-size: .8em;
}
.twitter-relative-created-at
{
  font-size: .6em;
}
The only thing missing is my Twitter image, the URL to which is not included in the JSON structure the script fetches. If I decide I want that, I'll probably write a second script, also run via cron, that finds that image and saves it locally.
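If I do go down that road, the second script might look something like the sketch below. Note that this is just a guess at how it would work: the users/show endpoint, the profile_image_url field, and the helper names are all my assumptions, not tested code.

```python
#!/usr/bin/python
# Sketch of a companion cron script: fetch a Twitter user's profile image
# and save it locally. The endpoint URL and the JSON field name below are
# assumptions, not verified against the live API.
import os

# Assumed endpoint returning a user's profile data as JSON
PROFILE_URL = 'http://twitter.com/users/show/%s.json'

def LocalImageName(image_url, directory):
  # Derive a local filename from the remote image URL: take the basename
  # of the URL path and place it in the given directory
  return os.path.join(directory, os.path.basename(image_url))

def FetchImage(twitterid, directory):
  # Imports deferred so the pure helper above stays dependency-free
  import simplejson
  import urllib2
  profile = simplejson.loads(urllib2.urlopen(PROFILE_URL % twitterid).read())
  image_url = profile['profile_image_url']  # assumed field name
  data = urllib2.urlopen(image_url).read()
  out = open(LocalImageName(image_url, directory), 'wb')
  out.write(data)
  out.close()
```

Run from cron the same way as the first script, it would keep a local copy of the image fresh without any third-party requests at page-load time.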
Again, great job, Twitter! A fun evening hack.