@Cloudvisionbot: An Experiment in AI APIs
I found myself in search of a quick project this weekend, eventually settling on building yet another Twitter bot.
While deciding exactly what I wanted this bot to do, I had an idea: why not experiment with the recently introduced Google Cloud Vision API and tweet simple descriptions of the content of randomly selected stock photos? After all, with the APIs already in place, stitching everything together would certainly be easy enough.
As I had suspected, it really was that easy: scarily easy, in fact, so much so that I had the thing working in a little over half an hour, starting from scratch. Not only was the Cloud Vision API simple to configure, but several client libraries for it were already freely available on npm. After registering for GCP and creating the bot's own Twitter account, it was only a quick matter of getting everything working together in harmony.
And so, @cloudvisionbot was brought into the world.
Much like its highly philosophical counterpart, CVbot runs on Node.js and is built upon Red Hat's excellent OpenShift PaaS (which, incidentally, lets you host up to three application instances free of charge: perfect for these sorts of small bots!).
Every hour, our curious little bot selects a beautiful image from Unsplash at random, retrieves a set of descriptive labels from the Cloud Vision API, and then tweets its findings. Very nice!
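The hourly loop above can be sketched in Node.js. A caveat: everything except composeTweet here is a hypothetical placeholder — fetchRandomUnsplashImage, getVisionLabels, and postTweet stand in for the Unsplash fetch, the Cloud Vision client, and the Twitter client, whose real names and wiring live in the actual repo. Only the tweet-formatting step is implemented.

```javascript
// Turn a list of Cloud Vision labels into a tweet like the one below,
// joining them into a natural-sounding "I can see ..." sentence.
function composeTweet(labels) {
  if (labels.length === 0) return "I can't quite make this one out.";
  if (labels.length === 1) return `I can see ${labels[0]}.`;
  const allButLast = labels.slice(0, -1).join(', ');
  return `I can see ${allButLast}, and ${labels[labels.length - 1]}.`;
}

// The hourly pipeline might look roughly like this (helper names are
// hypothetical placeholders, not the real implementation):
//
// setInterval(async () => {
//   const image = await fetchRandomUnsplashImage();   // random stock photo
//   const labels = await getVisionLabels(image);      // e.g. ['horizon', 'mountain', 'morning']
//   await postTweet(composeTweet(labels), image);
// }, 60 * 60 * 1000);

console.log(composeTweet(['horizon', 'mountain', 'morning']));
// → I can see horizon, mountain, and morning.
```

The interesting part is really just the label-to-sentence step; the rest is glue between three APIs, which is exactly why the whole thing came together so quickly.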
I can see horizon, mountain, and morning. pic.twitter.com/FYHr7hAB6r
— Cloud Vision Bot (@cloudvisionbot) May 22, 2016
An obvious description, yes, though eerily human.
As always, all of the code behind this project is available on my GitHub, and is likely to improve over the next few days as I get everything fine-tuned.