Debug App Performance Down to the Function Call: Introducing Continuous Profiling & UI Profiling


When something slows down in prod, it’s too easy to fall into old habits. Throw in a few more logs, ship some metrics, try to reproduce the issue locally, and maybe reach for perf or py-spy if you’re feeling ambitious. Traces can help, but they usually stop just short of explaining why things are slow, especially when it’s deep in the stack. 

This means it’s often up to the developer to intuitively know why the app might be behaving in a certain way—bloated renders from prop drilling, unnecessary re-renders triggered by state changes, or that one third-party library quietly spinning up 500 event listeners. 

That’s where Profiling comes into play. It doesn’t just highlight the symptom—it shows you the exact function calls, files, and line numbers chewing up CPU.

Today, we’re launching Continuous Profiling and UI Profiling, two powerful profilers that show you function-level insights into runtime behavior so you can find and fix bottlenecks faster.

Find & resolve backend bottlenecks with Continuous Profiling

It screamed on your development machine. Sailed through staging tests. But in production, under real load, that critical API endpoint sometimes just... drags. Or maybe a background worker randomly starts consuming way more memory than it should. Logs are frustratingly normal, traces confirm the service was hit but don't reveal the internal bottleneck. You're stuck trying to reproduce an elusive issue or guessing at hidden inefficiencies.

Continuous Profiling gives you always-on visibility into your backend. Great for long-running workloads, real-time APIs, and other code paths where “it was fine locally” just doesn’t cut it. 

Here’s where it helps:

  • CPU hotspots: When one endpoint suddenly starts hogging 3x the CPU and no one knows why—Profiling can point straight to the function-level bottleneck.

  • Batch jobs: Those nightly processes that quietly run for hours (and occasionally double your cloud bill)? Now you can see exactly where they’re spending time.

  • ML pipelines: We knocked 10 seconds off a latency spike in one of ours in about 10 minutes with Profiling. Turns out, one part of the model pipeline was doing way more than advertised – read about it here.

  • Analytics processes: Sometimes your “insight engine” is just a glorified loop over a million rows. Profiling helps catch the kind of inefficiencies that don’t show up in logs or metrics—until they hit production.

Backend services don’t have to be a black box when it comes to performance. Continuous Profiling can help you optimize infrastructure costs, reduce API response latency, and increase throughput, making it easy to spot inefficiencies before they blow up Slack.

We support Node.js and Python out of the box today.

Fix UI jank & lagginess with UI Profiling

The frontend is often the least forgiving part of your stack, where 50ms can mean the difference between buttery-smooth and rage-inducing. On mobile, expectations are even higher, with battery limitations and users running ancient devices that make troubleshooting difficult. One second your ListView is performing flawlessly, the next you're blocking on a network call and someone's rage-tapping while writing "laggy garbage 😡" in a review. 

UI Profiling captures code execution in real user sessions, letting you see exactly what’s dragging down responsiveness so you can deliver fast apps that actually feel good to use. 

Use it to:

  • Capture every function on the main thread to pinpoint code causing input delays, frame drops, and poor animations

  • Spot how long render functions, event handlers, and async callbacks are taking in real user flows

  • Identify repeated or bloated executions from third-party SDKs, ad libraries, and UI components

Next time your checkout screen stalls or a login page responds just slowly enough to confuse users, UI Profiling helps you pinpoint what's happening. It reveals exactly where your app spends its resources during startup—whether on bloated assets or blocking scripts—and helps you eliminate frustrating UI jank like stuttery screen transitions, layout shifts, or slow touch responsiveness.

Available for iOS and Android, with browser profiling for JavaScript coming soon.

Painting the whole performance picture

Tracing provides an essential high-level overview, mapping performance across services and identifying where slowdowns generally occur. Profiling complements this by diving deep into the specific function calls causing the delays, pinpointing exact file and line number locations. You don’t just know the general vicinity of your issue — you know exactly where to look.

Don’t we already have profiling?

For those keeping track, Profiling isn’t exactly new tech at Sentry. We first dropped our tools for visualizing and monitoring profiles back in 2022, and today, nearly 30,000 organizations use Sentry Profiling to debug their applications. While the initial version helped us catch issues in typical transactions, the 30-second cap wasn’t always ideal for longer tasks like background jobs and AI inference chains. 

To solve this, we’ve fundamentally changed how profiling works. We completely removed the old time limit and decoupled profiles from transactions, putting you in control with new config controls and APIs in the SDK. 

  • Start/Stop Controls (start_profiling() and stop_profiling()): Determine in code what defines the boundaries of your profile. For long operations like ML training, big data analysis, and long-running API requests, this can be as granular (or not) as you want it to be.

  • Session-Based Sampling (profile_session_sample_rate): Decide once per user session whether to profile, rather than evaluating for every transaction. This makes sampling more predictable and ensures you capture a complete profile.

  • Flexible Lifecycle Management (profile_lifecycle): Choose between manual start/stop control or automatic profiling tied to trace events. Great for tailoring profiling behavior to how granular you want your instrumentation to be.

  • Deprecated Configs: profiles_sample_rate and profiles_sampler are officially retired in favor of the more session-aware alternatives.

  • Unlimited Profiling Duration: Profiles are no longer capped at 30 seconds. Useful for analyzing longer tasks like batch jobs, ML pipelines, or full mobile sessions.

  • Transparent Billing: You’re billed based on how much you use, measured in Continuous Profile Hours (backend) and UI Profile Hours (frontend). No hidden surprises, no fuzzy math.
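Taken together, the new options might look like this in a Node.js service. This is a minimal sketch, assuming @sentry/node and @sentry/profiling-node are installed; the DSN is a placeholder, and the option names follow the Node SDK’s camelCase spelling of the settings above.

```javascript
const Sentry = require("@sentry/node");
const { nodeProfilingIntegration } = require("@sentry/profiling-node");

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [nodeProfilingIntegration()],
  tracesSampleRate: 1.0,
  // Decide once per session whether to profile this process,
  // instead of sampling per transaction.
  profileSessionSampleRate: 1.0,
  // "trace" profiles automatically while spans are active;
  // "manual" hands control to startProfiler()/stopProfiler().
  profileLifecycle: "trace",
});
```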

These improvements mean you can run profiles continuously while having fine-tuned controls to dial in how much profiling data you want to create, consume, and store. 

Getting started is quick

To get started with profiling in Sentry, all you have to do is install the package and add 3-5 lines of config code. 

The easiest way to get profiling data flowing is to enable trace lifecycle profiling in your Sentry.init() (Node.js example), which ensures that every operation wrapped in a span (HTTP requests, asset loads, etc.) is profiled and then assembled in Sentry. If you want more control, set profileLifecycle to manual, then call Sentry.profiler.startProfiler() before your code executes and Sentry.profiler.stopProfiler() after.
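The manual flow described above might look like this sketch, assuming the same Node.js packages as before; processNightlyBatch is a hypothetical long-running job standing in for your own code.

```javascript
const Sentry = require("@sentry/node");
const { nodeProfilingIntegration } = require("@sentry/profiling-node");

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  integrations: [nodeProfilingIntegration()],
  profileSessionSampleRate: 1.0,
  // Profile only between explicit start/stop calls.
  profileLifecycle: "manual",
});

async function main() {
  Sentry.profiler.startProfiler(); // begin capturing stack samples
  await processNightlyBatch();     // hypothetical long-running work
  Sentry.profiler.stopProfiler();  // stop sampling and send the profile
}
```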

Already using Sentry? Head to settings to start your free Profiling trial. Check out the docs for setup details and jump into the discussion on Discord with any feedback or questions.

If you’re new to Sentry, try it free anytime or request a demo to get started.