Userflix

Changelog

Latest Userflix product updates, shipped features, and improvements for AI-moderated research.

April 2026 — Ask Your Study

Query a study in plain language and get answers grounded in its transcripts, ranked by relevance, with citations and follow-up prompts.

Once a study has responses, researchers can query across all transcripts in plain language. Ask any question, such as "What themes appeared most often around portability?" or "Did anyone raise concerns about taste or aftertaste?", and get an answer grounded in the study's transcripts, ranked by relevance.

It is not just a summary. It is a conversational interface into the raw data. Answers include participant-level citations, matching excerpts, and follow-up prompts so researchers can move from a broad question to a sharper one without leaving the analysis view.

This turns the transcript archive into an exploratory research surface: teams can test hypotheses, compare reactions, dig into objections, and trace insights back to the original participant evidence.
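The changelog does not document how this works internally, but the described behaviour (grounded answers, relevance-ranked excerpts, participant-level citations) resembles retrieval over transcript excerpts followed by answer drafting. The sketch below is purely illustrative: the names Excerpt, relevance, and ask_study are assumptions, not the Userflix API, and the toy word-overlap score stands in for whatever ranking the product actually uses.

```python
from dataclasses import dataclass

@dataclass
class Excerpt:
    participant: str   # who said it (used for participant-level citations)
    text: str          # transcript snippet

def relevance(question: str, excerpt: Excerpt) -> float:
    """Toy relevance score: fraction of question words that appear in the excerpt.
    A production system would use embeddings or a reranker instead."""
    q_words = set(question.lower().split())
    e_words = set(excerpt.text.lower().split())
    return len(q_words & e_words) / max(len(q_words), 1)

def ask_study(question: str, transcripts: list[Excerpt], top_k: int = 3) -> dict:
    """Rank excerpts by relevance and return them as grounding for an answer."""
    ranked = sorted(transcripts, key=lambda e: relevance(question, e), reverse=True)
    cited = ranked[:top_k]
    return {
        "question": question,
        # In the product, an answer would be drafted from these excerpts;
        # here we only show the grounding and citation step.
        "citations": [{"participant": e.participant, "excerpt": e.text} for e in cited],
    }

excerpts = [
    Excerpt("P03", "I would carry it in my bag, it is small enough for the commute."),
    Excerpt("P07", "The aftertaste was a bit metallic, but the first sip was fine."),
    Excerpt("P11", "Portability matters more to me than battery life."),
]
print(ask_study("Did anyone raise concerns about taste or aftertaste?", excerpts))
```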

April 2026 — Text Interviews

Participants respond in writing at their own pace, with AI moderation, structured question types, stimuli, uploads, and voice memos; text now sits alongside voice as an equal format when creating studies.

Not every participant should have to speak, and not every research question needs a synchronous session. Text interviews let participants respond in writing: at their own pace, in their own time, over hours or days if needed.

The AI moderates the same way it does in voice: asking follow-ups, probing where relevant, keeping the conversation moving, and adapting to the participant's responses. Voice sessions and text threads now sit as equal formats when creating a new study.

Text interviews are more than a chat box. They support structured response types such as single choice, multiple choice, rankings, semantic differentials, NPS, star ratings, sliders, and comparison tasks. Participants can also react to image stimuli or embedded prototypes, upload their own screenshots or photos, and answer with short voice memos when typing is not the best format.

This makes text interviews useful for diary-style research, mobile contexts, async feedback, prototype reactions, and lightweight quant-qual tasks inside one moderated flow.
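The structured response types listed above map naturally onto a typed question schema. As a rough sketch of what a text-interview block could look like (every field name here is an assumption for illustration, not the Userflix data model):

```python
from dataclasses import dataclass, field
from typing import Literal, Optional

ResponseType = Literal[
    "open_text", "single_choice", "multiple_choice", "ranking",
    "semantic_differential", "nps", "star_rating", "slider", "comparison",
]

@dataclass
class QuestionBlock:
    prompt: str
    response_type: ResponseType = "open_text"
    options: list[str] = field(default_factory=list)  # choices, ranking items, or comparison concepts
    stimulus_url: Optional[str] = None                 # image or embedded prototype to react to
    allow_upload: bool = False                         # participant screenshots or photos
    allow_voice_memo: bool = False                     # short spoken answer instead of typing

guide = [
    QuestionBlock("How do you decide which snack to buy on a commute?"),
    QuestionBlock(
        "Rank these pack sizes from most to least appealing.",
        response_type="ranking",
        options=["Single serve", "Twin pack", "Family pack"],
    ),
    QuestionBlock("How likely are you to recommend this concept to a friend?", response_type="nps"),
]
```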

March 2026 — Study Composer Update

The guide editor is now an agentic workspace with a setup agent, a block-based guideline editor, advanced settings, URL redirects for panel integrations, and connectors such as Notion, Linear, and Intercom.

The guide editor is now a full agentic workspace. Researchers can build a guide from scratch through conversation with the AI, upload an existing brief, or link to a company website, and the AI structures the guide from there.

The setup agent can extract product context from URLs, identify user types and domain vocabulary, and turn loose research input into a structured interview guideline. The right panel holds the Interview Guideline as a block-based editor, so researchers can keep iterating while seeing the actual guide take shape.

Advanced Settings now include interview mode, script adherence, maximum interview length, study limits, end date, redirect after completion, and internal terminology. URL redirects support smoother panel integrations, so participants can land directly in a study from an external source and return to the right place afterward.
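To make the settings concrete, here is a hypothetical sketch of how they might be grouped; the field names, values, and the {PANEL_ID} placeholder are illustrative assumptions, not the product's configuration format.

```python
from dataclasses import dataclass, field

@dataclass
class AdvancedSettings:
    interview_mode: str = "voice"          # "voice" or "text"
    script_adherence: str = "balanced"     # how strictly the AI follows the guide
    max_interview_minutes: int = 30        # maximum interview length
    max_participants: int = 50             # study limit
    end_date: str = "2026-04-30"
    # Panel integrations: where participants are sent after completing the study.
    # {PANEL_ID} is a placeholder a panel provider would substitute; purely illustrative.
    redirect_after_completion: str = "https://panel.example.com/complete?pid={PANEL_ID}"
    internal_terminology: dict[str, str] = field(default_factory=lambda: {
        "the app": "Acme Go",              # vocabulary the moderator should use instead
    })
```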

The composer also supports external data connectors. Connected sources such as Notion, Linear, and Intercom can be enabled for the workflow, giving the setup agent more context when preparing the study.

February 2026 — Voice + Vision

Live sessions can use the participant's camera and screen so the AI moderates what it actually sees; richer stimuli, documents, links, videos, and rating scales support multimodal research.

Participants can now show what they are looking at during a live interview. With one toggle in the study settings, the AI moderator gains access to the participant's camera and screen, enabling it to moderate based on what it actually sees: a packaging concept, a prototype, a live website, or anything else the participant is reviewing in real time.

The live interview surface also became richer. The moderator can work with visual stimuli, documents, embedded links, videos, and rating scales during the session. This turns the call from a voice-only interview into a multimodal research environment where participants can react, explain, point, compare, and rate while the AI keeps the conversation moving.

January 2026 — Agent Orchestrator

Studies run end to end automatically, with agents handing off interviews, transcription, structuring, and analysis, so researchers reach insights without manual data movement.

Studies now run end-to-end without manual handoffs. Once a study is set up, Userflix runs the full pipeline automatically: interviews are conducted, transcribed, structured, and analysed by agents that pass work to one another.

Researchers get to the insight stage without ever moving data by hand: no export, no copy-paste, no waiting on a handoff. The system moves from participant collection to transcript processing to analysis preparation as one continuous workflow.
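A toy pipeline in the spirit of that handoff is sketched below: each "agent" is a plain function that takes the previous stage's output and passes its result to the next one. The stage names follow the changelog; everything else (function names, data shapes) is an assumption for illustration, not how Userflix is implemented.

```python
def conduct_interviews(study: dict) -> dict:
    return {**study, "recordings": [f"recording for {p}" for p in study["participants"]]}

def transcribe(study: dict) -> dict:
    return {**study, "transcripts": [r.replace("recording", "transcript") for r in study["recordings"]]}

def structure(study: dict) -> dict:
    return {**study, "structured": [{"source": t, "themes": []} for t in study["transcripts"]]}

def analyse(study: dict) -> dict:
    return {**study, "analysis_ready": True}

PIPELINE = [conduct_interviews, transcribe, structure, analyse]

def run_study(study: dict) -> dict:
    """Run every stage in order; no exports or manual handoffs in between."""
    for stage in PIPELINE:
        study = stage(study)
    return study

result = run_study({"participants": ["P01", "P02"]})
print(result["analysis_ready"])  # True
```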

This also changed the role of the researcher. Instead of coordinating operations, they can stay focused on the research question, the quality of the guide, and the meaning of the findings.

October 2025 — AI Report Builder

Turn completed study data into a structured research report—qualitative findings, quotes, scores, comparisons, methodology, and recommendations in one coherent first version.

After interviews are complete, Userflix can turn the collected study data into a structured research report. The report flow moves through transcript processing, feedback extraction, stimulus analysis, sentiment evaluation, pattern recognition, quote curation, category formation, quantitative summaries, journey mapping, insight prioritisation, recommendations, charts, and final summary writing.

The generated report combines qualitative findings, participant quotes, structured scores, concept comparisons, rankings, methodology, and recommendations. Instead of starting from a blank slide or document, researchers get a coherent first version of the analysis that they can review, edit, and share.
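As a rough picture of what the finished report contains, the sections named above can be read as a single structured object. The sketch below only mirrors that list; the field names and types are illustrative assumptions rather than the Userflix report format.

```python
from dataclasses import dataclass, field

@dataclass
class StudyReport:
    """Shape of the generated report as described in the changelog; illustrative only."""
    qualitative_findings: list[str] = field(default_factory=list)
    participant_quotes: list[dict] = field(default_factory=list)       # {"participant": ..., "quote": ...}
    structured_scores: dict[str, float] = field(default_factory=dict)  # e.g. NPS, star ratings
    concept_comparisons: list[dict] = field(default_factory=list)
    rankings: list[dict] = field(default_factory=list)
    methodology: str = ""
    recommendations: list[str] = field(default_factory=list)
```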