
Research Case Studies

Research That Drives Change

Note: The following case studies represent a curated selection of my work at an enterprise technology division within a global holding group. Client identity has been anonymised. All research design, findings, and recommendations are my own.
Case Study 01 · Large Enterprise SaaS Platform

Customer Support & Knowledge: Resolution Journey & Impact

Role: Lead Researcher
Method: Semi-Structured Interviews
Type: Qualitative + Quantitative

11 Participants · 60 min Per Session · 3 Hypotheses Tested · ~80 Screener Responses · Global Coverage

Platform users were raising support requests but losing trust in the outcome. After an initial response, communication went silent — no status, no owner, no ETA. Issues persisted and workarounds proliferated. The product and support teams had no structured evidence of where the resolution journey was breaking down or what the real cost to users was.

Understand where users get stuck, redirected, or delayed across the full post-submission support journey. Quantify how long issues persist and the tangible impact on delivery. Identify what users need in terms of communication, speed, and ownership — and translate that into prioritised product and operational recommendations.

I designed a branching discussion guide with three distinct paths (problem resolution, quick fix, and feedback/inquiry) to capture divergent experiences without forcing participants into a single narrative. I ran the 60-minute remote sessions as the sole researcher end-to-end, then layered in quantitative signals from the customer service platform (3- and 6-month windows) and ~80 screener responses. Analysis was conducted in HeyMarvin and cross-validated manually.

  • Fast initial response raises expectations — then communication disappears entirely
  • 10 of 11 users described support as a “black box” after the first reply
  • Tickets auto-closed as “resolved” while the problem persisted
  • 11 of 11 users built manual workarounds due to unpredictable timelines
  • One user worked a 17-hour day compensating for platform issues on a live client project
  • Feature requests acknowledged politely, then vanished without trace
  • Poor urgency triage: critical issues treated identically to low-priority queries
“They said they would get back to us, but they didn’t get back for a whole day. I followed up. Never heard back. Then they closed out the ticket over the break.”
— Talent Acquisition Partner, Global Media Agency, US
“Something that should have taken 10 minutes took me two hours. On a day I planned to work normal hours, I ended up working 17 hours.”
— Data Analysis Lead, Global Creative Agency, US
“I don’t get ticket numbers. Most help desks, you get a ticket number you can see — just to manage my expectations.”
— Finance Manager, Global Holding Group
“If I had an urgent question I needed answered within a couple of hours, I probably wouldn’t raise a ticket. I’d just sort it myself and move on.”
— Strategist, Global Agency Network

  • Build a Transparency Hub: an in-app “My Tickets” dashboard showing live status, owner, SLA countdown, and ETA for every open request
  • Mandate proactive “heartbeat” updates every 24–48 hours on priority issues, even when nothing has changed
  • Stop silent ticket closures: require user confirmation before any issue is marked resolved
  • Launch a public-facing status page for platform-wide incidents
  • Introduce urgency triage at intake, with mandatory business-impact fields and published severity SLAs
  • Productise the most common workarounds into self-service flows so users are never dependent on support availability for routine tasks

📌 Research Delivered
Findings and recommendations were presented to the Senior Director and product, support, and design leads in January 2026. The Transparency Hub and proactive communication recommendations are under active consideration for the product roadmap. This was the first structured evidence of the resolution journey’s failure points — and the first time support costs were quantified in terms of user time and delivery risk.

Case Study 02 · Large Enterprise SaaS Platform

Access Management & Group Permissions: Governance, Architecture & Operational Burden

Role: Lead Researcher
Method: Interviews · Heuristic Eval · Survey
Type: Mixed Methods

20+ Interviews · 1,500+ Survey Inputs · 3 Critical Themes · 7 Lifecycle Phases · Global Coverage

The platform was scaling across global agency markets, but the access management system was fundamentally broken. Admins could not govern who had access to what. Environments drifted from instance to instance. Ex-employees retained active access. Workarounds proliferated, including one admin who built a custom script just to audit permissions. No structured research existed to make the case for prioritisation to product leadership.

Diagnose where the access management model breaks down across the full project lifecycle, and at what cost. Understand how admins are currently governing access, where architecture fragments, and what the operational and compliance impact of current failures is. Provide product leadership with a causal account of why the system fails at scale — not just a list of bugs.

I led the evaluative interview track alongside a parallel heuristic evaluation (Nielsen–Molich framework, full platform). All findings were mapped to a 7-phase project lifecycle, from Lead/Pitch through Scale, with severity rated per phase. Three independent evidence streams (interviews, heuristic evaluation, and a ~1,500-person proficiency survey) gave the findings a robustness that was difficult for stakeholders to dismiss. Analysis was conducted in HeyMarvin and cross-validated manually.

  • 10/10 admins reported broken or unpredictable permission behaviour
  • No audit trail — impossible to see who had access to what, or when it changed
  • Ex-employees retaining active access; regions granting Owner to entire teams as a workaround
  • 3–6 versions of the same folder structure across one client’s markets
  • For one global FMCG client, updating 8 of 150 workspaces took ~200 clicks per hour and 20+ hours of admin time
  • One admin built a custom Puppeteer script to audit permissions — the tooling didn’t exist
  • Governance failures directly blocked commercialisation with enterprise clients

Mid-synthesis, I developed a causal-chain framing to show stakeholders why the three themes compound, and why fixing symptoms without addressing the root cause would not solve the problem.

01 · Poor Governance: roles unpredictable, no audit trail, no boundaries.
02 · Breaks Architecture: instances drift, templates splinter, no inheritance.
03 · Multiplies Burden: admins bottlenecked, manual work explodes.
04 · Produces Workarounds: Owner access for all, custom scripts, data leakage.
“I can’t tell who actually has access — so I assume everyone does. We have ex-employees who can still log in. That’s a red flag for any enterprise.”
— Global Platform Admin, Media Agency, EMEA
“Every market is running its own version of the platform. None of them match. We duplicate the same project across 20 markets — one change becomes 20 changes.”
— Platform Lead, Data Agency, AMER
“I spend more time fixing the platform than doing my job. The system makes the wrong way the only practical way.”
— Senior Admin, Creative Agency, AMER
“Permissions feel like guesswork — and guessing is not an option with client data.”
— Admin, Global Agency Network, EMEA

Immediate:
  • Add a basic audit log so admins can see who has access and when it changed
  • Standardise role definitions across all platform instances
  • Introduce regional guardrails to prevent cross-instance permission bleed
  • Reinstate bulk permission management via CSV upload

Longer term:
  • Group-based permissioning with role inheritance across workspace, project, tool, and agent levels
  • A global template bundle with version control and propagation
  • An agent governance framework covering ownership, versioning, and access

All recommendations were sequenced against severity ratings across the 7-phase lifecycle.

📌 Research Delivered
Research was delivered to the Senior Director and Governance, Platform, and Product teams in December 2025. The audit log, role definition consistency, and bulk permissions recommendations are under active prioritisation. The causal chain framing has been adopted as the working model for the access management roadmap discussion — the first time governance, architecture, and operational burden had been connected in a single strategic narrative.