Once considered science fiction, facial recognition technology has become a reality for organizations all over the world. Now, instead of manually organizing and analyzing facial data from photo input, a facial recognition API can digest the photo for us, returning the position, gender, and age of each person in the image and, in the case of celebrities, their identity. This information lets applications glean demographic data from images, which can be useful when analyzing a person’s social media habits or determining which images deliver the highest return on investment in advertising campaigns.
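To make the output concrete, here is a minimal sketch of how an application might summarize that kind of response. The JSON shape below is an assumption modeled loosely on AlchemyAPI-style face-tagging output, not the service's exact schema; real field names may differ.

```python
# Sketch: pulling demographics out of a face-detection response.
# The response layout here is an assumed example, not the literal API schema.

sample_response = {
    "status": "OK",
    "imageFaces": [
        {
            "positionX": "124", "positionY": "60",
            "width": "92", "height": "92",
            "gender": {"gender": "FEMALE", "score": "0.98"},
            "age": {"ageRange": "25-34", "score": "0.87"},
            "identity": {"name": "Some Celebrity", "score": "0.91"},
        }
    ],
}

def summarize_faces(response):
    """Return one summary dict per detected face."""
    faces = []
    for face in response.get("imageFaces", []):
        faces.append({
            # bounding box as (x, y, width, height)
            "box": (int(face["positionX"]), int(face["positionY"]),
                    int(face["width"]), int(face["height"])),
            "gender": face["gender"]["gender"],
            "age_range": face["age"]["ageRange"],
            # identity is only present when a celebrity is recognized
            "identity": face.get("identity", {}).get("name"),
        })
    return faces

print(summarize_faces(sample_response))
```

An advertising pipeline could feed these per-face summaries straight into its demographic reporting.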
No matter how adamantly people deny it, everyone is guilty of eavesdropping at one point or another; it’s basic human nature. Most of the time, the idle gossip we overhear is nothing more than that, and simply keeps us distracted while sitting at the airport or waiting in line for morning coffee. But sometimes these conversations spark our imagination. That happened to me recently when I overheard a few young men talking about a concert they had seen the night before. I was intrigued, not by their assessment of the band, but by the natural language processing (NLP) application their conversation suggested. For context, here is the gist of the short discussion:
One of the true pleasures of my job is seeing the problems developers solve using our services. I see intriguing initial ideas, projects in process, and truly original complete applications. Today, I am excited to share ADapTV, an application created at the ComcastNBCU Hackathon by Max Maybury and Darren Gilbert.
Max is a student at the University of Bath, currently on a placement year at Techex, where he met Darren, a graduate of Staffordshire University. Together, they built ADapTV, an ad-targeting system based on the AlchemyLanguage keyword extraction and AlchemyVision image tagging capabilities.
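A system like ADapTV ultimately has to match ads against the signals extracted from content. The sketch below shows one plausible matching step, with the AlchemyLanguage and AlchemyVision calls replaced by precomputed keyword and tag lists; the scoring rule and ad inventory are illustrative assumptions, not ADapTV's actual design.

```python
# Sketch of an ad-matching step: score each ad in the inventory by how many
# of its target terms overlap the keywords (from dialogue text) and image
# tags (from video frames) extracted for a piece of content.

def best_ad(keywords, image_tags, inventory):
    """Pick the ad whose target terms overlap most with the content signals."""
    signals = set(keywords) | set(image_tags)
    def score(ad):
        return len(signals & set(ad["targets"]))
    return max(inventory, key=score)

inventory = [
    {"name": "trainers", "targets": {"running", "fitness", "shoes"}},
    {"name": "coffee",   "targets": {"breakfast", "morning", "cafe"}},
]

ad = best_ad(
    keywords=["marathon", "running", "training"],   # e.g. keyword extraction
    image_tags=["shoes", "street"],                 # e.g. image tagging
    inventory=inventory,
)
print(ad["name"])  # → trainers
```

In a real deployment, the overlap count would likely be replaced by relevance-weighted scores from the extraction services.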
Companies and people around the world are developing game-changing solutions with applications powered by Watson and AlchemyAPI services. IBM Watson recently hosted a hackathon at the first annual World of Watson (WoW) event in Brooklyn, New York. Almost 170 developers water-taxied across the East River to the Duggal Greenhouse, where they were given 48 hours and full access to all Watson and AlchemyAPI services to build innovative cognitive apps. These developers leveraged any of the 15+ services available on Bluemix, individually or in combination. The result was 40 unique applications that combined APIs such as Trade-Off Analytics, Personality Insights, and Sentiment Analysis. The use cases for these applications range from empowering middle schoolers to choose the best high school based on their personality, to facilitating connections between like-minded people based on their geographic location. In just 2 days, developers created powerful cognitive applications that will improve lives.
Francis Crick and James Watson cracked the DNA code in 1953. The Human Genome Project was completed in 2003. Today, genome mapping can be accomplished in hours. What’s the point of these examples? AlchemyAPI, an IBM company, and others are working to crack the code on what is known as unstructured data. All the while, an argument is being made that there really isn’t any unstructured data at all. Instead, the problem is finding the best way to uncover the structure that already exists, just like the first steps in modeling the double helix of DNA, and then taking that breakthrough to higher levels.
After running the first part of this series, I had a question come in from Charles Cameron (@hipbonegamer), a well-known author and terrorism researcher:
Entity extraction is becoming a critical tool for detecting mentions of people, companies, cities, geographic features and other typed entities in massive quantities of text. It is one of the most common starting points for using natural language processing to enrich your content. In business intelligence, ad targeting, content recommendation, and social media monitoring, entity extraction can add a wealth of semantic knowledge to your content.
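To show the shape of what entity extraction produces, here is a toy illustration. The real AlchemyLanguage service detects entities statistically; the gazetteer lookup below is only a stand-in, and its entity list and output fields are invented for the example.

```python
# Toy illustration of entity extraction output: mentions of typed entities
# (Person, Company, City, ...) located in raw text. A dictionary lookup
# stands in for the statistical detection a real service performs.

ENTITY_GAZETTEER = {
    "IBM": "Company",
    "Brooklyn": "City",
    "James Watson": "Person",
}

def extract_entities(text):
    """Return mention, type, and character offset for each known entity."""
    found = []
    for mention, etype in ENTITY_GAZETTEER.items():
        start = text.find(mention)
        if start != -1:
            found.append({"text": mention, "type": etype, "offset": start})
    return sorted(found, key=lambda e: e["offset"])

doc = "IBM hosted a hackathon in Brooklyn."
for ent in extract_entities(doc):
    print(ent["type"], ent["text"])
```

Downstream systems typically consume exactly this kind of (mention, type, offset) triple, whether it comes from a lookup or a trained model.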
The other day I had a conversation with a buddy who spent years working as a marketing manager for the NBA (National Basketball Association). We talked about how college and high school sports writers have an uncanny and mostly unbiased perspective on today’s athletes. They spend thousands of hours studying and analyzing individual athletes. What if we could collectively and holistically hear every sports writer’s assessment of every football athlete and find sentiment patterns in the results? What if those patterns told a story that is not currently being told? What if we could identify athletes who are not heavily sought after by professional teams but who later become great professionals, the athletes known as “sleepers”? What if we could use sentiment analysis and other tools to see what writers are saying about these sleepers before they become legends?
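The “sleeper” hunt described above boils down to a simple aggregation. Here is a minimal sketch of that step, with made-up sentiment scores and a made-up recruiting flag standing in for what a sentiment-analysis service and recruiting data would supply.

```python
# Sketch: aggregate the sentiment score every writer assigns an athlete,
# then flag athletes whose average sentiment is high even though recruiting
# interest is low. All inputs here are invented for illustration.

from statistics import mean

articles = [
    {"athlete": "Player A", "sentiment": 0.8},
    {"athlete": "Player A", "sentiment": 0.7},
    {"athlete": "Player B", "sentiment": 0.2},
]
heavily_recruited = {"Player B"}

def find_sleepers(articles, heavily_recruited, threshold=0.5):
    """Return under-recruited athletes with mean sentiment >= threshold."""
    by_athlete = {}
    for art in articles:
        by_athlete.setdefault(art["athlete"], []).append(art["sentiment"])
    return [name for name, scores in by_athlete.items()
            if mean(scores) >= threshold and name not in heavily_recruited]

print(find_sleepers(articles, heavily_recruited))  # → ['Player A']
```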
Imagine you are an application developer at IBM. Your team has been approached by the technical support manager to help automate the analysis of a massive database of emails. The team wants to consolidate results and better understand IBM’s customers. As you build the application, you want to identify the main keywords customers use to describe their problems, the sentiment they associate with your brand, the related entities, and the most commonly tagged concepts.
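The consolidation step might look like the sketch below. The per-email results are hand-written stand-ins for what a text-analysis service would return for keywords, sentiment, entities, and concepts; the field names are assumptions for the example.

```python
# Sketch: roll per-email analysis results up into the summary the support
# manager asked for — top keywords, top concepts, entity mention counts,
# and average sentiment across the mailbox.

from collections import Counter

email_results = [
    {"keywords": ["login error", "password"], "sentiment": -0.6,
     "entities": ["IBM"], "concepts": ["Authentication"]},
    {"keywords": ["login error"], "sentiment": -0.3,
     "entities": ["IBM"], "concepts": ["Authentication", "Security"]},
]

def consolidate(results):
    summary = {
        "top_keywords": Counter(),
        "top_concepts": Counter(),
        "entity_mentions": Counter(),
    }
    for r in results:
        summary["top_keywords"].update(r["keywords"])
        summary["top_concepts"].update(r["concepts"])
        summary["entity_mentions"].update(r["entities"])
    summary["avg_sentiment"] = sum(r["sentiment"] for r in results) / len(results)
    return summary

report = consolidate(email_results)
print(report["top_keywords"].most_common(1))  # → [('login error', 2)]
```

At scale, the same rollup would run over millions of emails, with each per-email dict produced by the analysis service rather than written by hand.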