The Phonic team has been working hard the past few weeks and just launched a massive update live on Product Hunt. Although there are improvements across the platform, the centrepieces of this update are two new features: Showreels and Auto-Coding.
Showreels are a natural extension of Phonic's audio and video ingestion, giving researchers one more weapon in their storytelling toolkit. Aside from being longer and more descriptive, voiced responses are full of emotion and tone of voice, with video further supplementing facial expressions and body language. All of these signals convey subtext to humans, and they are why we find videos so compelling.
Our focus when designing showreels was simplicity. We wanted to give researchers all the tools they needed to create powerful videos, but we didn't want all the bells and whistles of a full video editor in the cloud. Things like subtitles and audio animation should just work, and it should be easy to trim and add media from multiple different studies.
The resulting editor looks and feels familiar, with a preview in the top right and a timeline across the bottom. Clips can be dragged into the timeline, re-ordered, and trimmed to length. Subtitles can be easily enabled, and any audio clips are animated to maintain the audience's focus as they switch from audio to video.
Read more about showreels here.
Response Coding Features
A recent Phonic update introduced tags that can be applied to individual responses. These allow researchers to create a code frame and tag their responses directly inside of Phonic.
Tags also make filtering responses a breeze.
The problem with tags is that they take time to apply manually. The first way to speed this up is by enabling topic generation, a survey setting that parses responses for themes and assigns common tags. Topic generation kick-starts the coding process by seeding topics with keywords and themes directly mentioned. For example, mentions of the "President" would automatically be tagged and grouped under the president tag.
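Conceptually, topic generation works like keyword-seeded tagging: themes mentioned directly in a response are matched against seed keywords and the corresponding tag is applied. A minimal sketch of the idea (the keyword lists and function here are hypothetical, not Phonic's actual implementation):

```python
# Hypothetical sketch of keyword-seeded topic tagging.
# The topic-to-keyword mapping below is illustrative only.
TOPIC_KEYWORDS = {
    "president": ["president", "presidential"],
    "economy": ["economy", "inflation", "prices"],
}

def tag_response(text: str, topics: dict[str, list[str]]) -> set[str]:
    """Assign a topic tag whenever one of its seed keywords appears in the response."""
    lowered = text.lower()
    return {
        topic
        for topic, keywords in topics.items()
        if any(keyword in lowered for keyword in keywords)
    }

print(tag_response("I think the President handled it well.", TOPIC_KEYWORDS))
# → {'president'}
```

A response mentioning the "President" lands under the president tag automatically, kick-starting the code frame before any manual review.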
If manual coding is a bicycle and topic generation is a Ford Explorer, Auto-Coding is a Ferrari.
Auto-coding (sometimes called auto-classification) takes automatic coding to the next level by grouping responses into pre-specified classes.
For example, the sample data below is from a survey collecting granola bar feedback. Here we construct a frame with two classes: "likes granola bars" and "doesn't like granola bars". Despite the language being very casual ("I'm not a fan"), the auto-classification still correctly predicts that the respondent doesn't like granola bars.
This example is cute, but it isn't very realistic, since we could always just ask respondents whether they like granola bars. A more realistic situation occurs when we ask an open-ended prompt like "Why do you typically purchase granola bars?" with a large set of possible responses. In this case, we can construct a set of codes after the fact. For example, we can test the above response against the possible classes "travel", "sports", or "health". Once again, we see that only travel is strongly associated with the response, and only the travel tag would be assigned.
This idea can be extended indefinitely, and responses can be tagged into extremely abstract classes:
- Who likes your product and who doesn't
- Who is authentic, sarcastic, rude, or confused
- Which responses are grammatical
- Any other open-ended query imaginable
We'll write at greater length on how to get the most out of auto-coding in the future, but this demo provides a useful starting point for AI-augmented research.
Also included in this update:

- Ranking question type
- Toggle to disable auto-refresh in the question builder