Connected TV

Smart TV and connected TV app development

Device fragmentation, browser engines, playback pipelines, and how to structure a build and QA matrix for connected TV apps.

[Image: multiple smart TV screens showing different stages of streaming app development]

Building apps for smart TVs means building for a landscape where no two device families work quite the same way. The browser engine versions differ. The playback pipeline APIs differ. Memory limits, input models, DRM support, and even the definition of “app lifecycle” vary between Samsung, LG, Roku, and every other platform. This page covers the fragmentation problem head-on, walks through the key technical areas that cause the most trouble, and offers a framework for structuring your build and QA process so it does not become unmanageable.

Device fragmentation: the central challenge

Smart TV fragmentation is not like mobile fragmentation. On mobile, you have two platforms (iOS, Android) with relatively predictable hardware capabilities and well-documented APIs. On connected TV, you have:

  • Samsung Tizen running a Chromium-based web engine, but the Chromium version depends on the TV model year. A 2020 Samsung TV and a 2024 Samsung TV may have significantly different CSS support, JavaScript performance, and media API behavior.
  • LG webOS also running a Chromium-based engine, but a different version, with a different update cadence, and with Luna Service APIs for system integration that have no equivalent on other platforms.
  • Roku running its own OS with BrightScript/SceneGraph, which is a completely different development model: no web engine, no JavaScript, no DOM.
  • Google TV / Android TV running Android with the Leanback framework, which is native (or web-wrapped) rather than browser-based.
  • Fire TV running Android, but with Amazon’s own launcher, app store, and device-specific behaviors.
  • Vizio SmartCast, Hisense VIDAA, and various other proprietary platforms that each have their own constraints.

This is not a problem you solve once. Each model year brings new hardware with updated (or sometimes not updated) software. Your app needs to work on the current generation and typically two to three previous generations, which means supporting a range of Chromium versions, media API capabilities, and hardware performance levels simultaneously.

Browser engines and what they actually support

For the web-based smart TV platforms (Samsung Tizen, LG webOS), your app runs inside a Chromium-based rendering engine. But calling it “Chromium” can be misleading. The actual capabilities depend on:

  • Chromium version: A 2019 Samsung TV might run Chromium 56. A 2024 model might run Chromium 108. The gap in feature support between those versions is enormous.
  • Custom patches: TV manufacturers patch the Chromium build for their platform. Some patches fix bugs. Some introduce new ones. Some disable features that work in standard Chromium.
  • Hardware acceleration: Which CSS properties are GPU-accelerated varies by device. What looks smooth on one TV stutters on another because a specific transform or filter is being software-rendered.

The practical result is that you cannot assume modern web platform features are available. Intersection Observer, CSS Grid, modern ES syntax, web workers: all of these need to be validated against your target device matrix. Transpiling and polyfilling help, but some things cannot be polyfilled (like missing media API support).
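As a concrete illustration, here is a minimal sketch of the kind of startup probe this implies, assuming a web-based target (Tizen or webOS). The feature list is an example, not an exhaustive checklist.

```typescript
// Illustrative startup probe: log which modern web-platform features the
// device's engine actually exposes, so missing ones can be polyfilled or the
// dependent feature disabled. The feature list is an example, not exhaustive.
interface FeatureReport {
  intersectionObserver: boolean;
  cssGrid: boolean;
  webWorkers: boolean;
  mediaSource: boolean;
  encryptedMedia: boolean;
}

function probeFeatures(): FeatureReport {
  return {
    intersectionObserver: typeof IntersectionObserver !== "undefined",
    cssGrid: typeof CSS !== "undefined" && CSS.supports("display", "grid"),
    webWorkers: typeof Worker !== "undefined",
    mediaSource: typeof MediaSource !== "undefined",
    // EME entry point; if this is missing, it cannot be polyfilled.
    encryptedMedia: "requestMediaKeySystemAccess" in navigator,
  };
}

// Log once at startup so the report shows up in device/remote logs.
console.table(probeFeatures());
```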

Playback pipelines: MSE, EME, and platform players

Video playback on smart TVs typically works through one of three mechanisms:

Media Source Extensions (MSE) is the web standard for adaptive bitrate streaming in the browser. You create a MediaSource object, attach it to a video element, and feed it media segments through SourceBuffers. This is how most web-based smart TV players work (Shaka Player, dash.js, hls.js all use MSE under the hood).

The catch is that MSE implementations on smart TVs are not all identical. Buffer eviction behavior differs. Codec change handling differs. The timing of events like “updateend” can vary. Appending segments too quickly can crash the buffer. Appending them too slowly can cause playback stalls.
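To make the append-pacing point concrete, here is a minimal sketch of an MSE append queue that waits for “updateend” between appends. The codec string is an example and the segment source is assumed to be your player’s download/ABR logic; a production player library already does this (and much more) for you.

```typescript
// Minimal MSE append queue: append one segment at a time and wait for
// "updateend" before the next append. Appending while the SourceBuffer is
// still updating throws, and appending too aggressively can exhaust memory
// on low-end TVs.
class SegmentAppender {
  private queue: ArrayBuffer[] = [];

  constructor(private sourceBuffer: SourceBuffer) {
    sourceBuffer.addEventListener("updateend", () => this.appendNext());
    sourceBuffer.addEventListener("error", () =>
      console.error("SourceBuffer error")
    );
  }

  // Called by your (assumed) download/ABR logic whenever a segment arrives.
  enqueue(segment: ArrayBuffer): void {
    this.queue.push(segment);
    this.appendNext();
  }

  private appendNext(): void {
    if (!this.sourceBuffer.updating && this.queue.length > 0) {
      this.sourceBuffer.appendBuffer(this.queue.shift()!);
    }
  }
}

const video = document.querySelector("video") as HTMLVideoElement;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", () => {
  // Example codec string; verify per-device support before relying on it.
  const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.640028"');
  const appender = new SegmentAppender(buffer);
  // appender.enqueue(segment) is then driven by the segment fetch loop.
});
```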

Encrypted Media Extensions (EME) handles DRM. EME provides the JavaScript API for requesting license keys and managing decryption sessions. On smart TVs, EME typically connects to Widevine (most common) or PlayReady. The implementation details matter: some devices handle license renewal during playback smoothly, others interrupt playback briefly, and some fail silently until the session expires.
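For orientation, this is roughly the EME surface involved in a Widevine session. The license server URL and codec string are placeholders, and in practice a player library such as Shaka or dash.js drives this handshake for you.

```typescript
// Minimal EME handshake for Widevine, shown only to illustrate the API
// surface; in practice a player library (Shaka, dash.js) drives this.
// LICENSE_SERVER_URL is a placeholder.
const LICENSE_SERVER_URL = "https://example.com/widevine/license";

async function setupWidevine(video: HTMLVideoElement): Promise<void> {
  const access = await navigator.requestMediaKeySystemAccess("com.widevine.alpha", [
    {
      initDataTypes: ["cenc"],
      videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.640028"' }],
    },
  ]);
  const mediaKeys = await access.createMediaKeys();
  await video.setMediaKeys(mediaKeys);

  video.addEventListener("encrypted", async (event: MediaEncryptedEvent) => {
    const session = mediaKeys.createSession();
    session.addEventListener("message", async (msg: MediaKeyMessageEvent) => {
      // Forward the CDM's license request to the license server, then hand
      // the license back to the session. Renewal messages arrive the same way.
      const response = await fetch(LICENSE_SERVER_URL, {
        method: "POST",
        body: msg.message,
      });
      await session.update(await response.arrayBuffer());
    });
    await session.generateRequest(event.initDataType, event.initData!);
  });
}
```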

Platform-specific player APIs are sometimes available as alternatives to MSE. Samsung provides AVPlay, an API that handles media playback at a lower level than the web media stack. LG provides similar native player integration. These platform players can offer better performance and reliability than MSE on some devices, but they tie your code to a specific platform and may have their own quirks.

Our general recommendation: use MSE with a well-tested player library (Shaka, dash.js, or hls.js depending on your format requirements), but be prepared to work around platform-specific MSE implementation differences. Maintain a matrix of known device-specific workarounds.
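One way to keep that workaround matrix usable in code is a small configuration layer keyed by device. The flag names and device keys below are hypothetical, purely to show the shape.

```typescript
// Hypothetical shape for a device-workaround matrix. The flag names and
// device keys are illustrative, not a real registry.
interface PlaybackWorkarounds {
  maxBufferAheadSeconds: number;     // keep appends conservative on low-RAM sets
  recreateSourceBufferOnCodecChange: boolean;
  delayAfterSeekMs: number;          // some engines need a pause before resuming appends
}

const DEFAULTS: PlaybackWorkarounds = {
  maxBufferAheadSeconds: 30,
  recreateSourceBufferOnCodecChange: false,
  delayAfterSeekMs: 0,
};

// Keyed by platform plus model year, matched against the device's user agent
// or platform API at startup.
const WORKAROUNDS: Record<string, Partial<PlaybackWorkarounds>> = {
  "tizen-2019": { maxBufferAheadSeconds: 15, delayAfterSeekMs: 250 },
  "webos-4.x": { recreateSourceBufferOnCodecChange: true },
};

function workaroundsFor(deviceKey: string): PlaybackWorkarounds {
  return { ...DEFAULTS, ...WORKAROUNDS[deviceKey] };
}
```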

DRM on connected TV devices

DRM on smart TVs is mostly Widevine, with some PlayReady on older devices or specific platforms. The challenges are:

  • Security level inconsistency. Some TVs support Widevine L1 (hardware-backed); others only support L3 (software). Content providers often require L1 for HD content. You need to know which security level each target device supports and handle the fallback case.
  • License persistence. Some devices cache licenses reliably. Others do not. For offline playback or long viewing sessions, you need to handle license renewal without interrupting playback.
  • Key rotation. For live content with key rotation, the device needs to handle mid-stream key changes smoothly. This is an area where device-specific bugs are common.
  • Individual device provisioning. Some devices need to be provisioned (receive device-specific credentials) before they can obtain DRM licenses. Provisioning failures are a support burden because they are hard to diagnose remotely.

Test DRM early and test it on real hardware. Emulators and simulators do not accurately reproduce DRM behavior.
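A rough way to answer the security-level question above from inside a web app is to probe EME with Widevine robustness strings. Support for robustness hints itself varies by device, so treat the result as a hint and confirm it against real license requests on hardware.

```typescript
// Rough L1-vs-L3 probe: request Widevine with a hardware-backed robustness
// level, falling back to software. Some devices ignore or misreport
// robustness, so confirm the answer with real license requests on hardware.
const WIDEVINE = "com.widevine.alpha";
const VIDEO_MP4 = 'video/mp4; codecs="avc1.640028"'; // example codec string

async function canPlayWithRobustness(robustness: string): Promise<boolean> {
  try {
    await navigator.requestMediaKeySystemAccess(WIDEVINE, [
      {
        initDataTypes: ["cenc"],
        videoCapabilities: [{ contentType: VIDEO_MP4, robustness }],
      },
    ]);
    return true;
  } catch {
    return false;
  }
}

async function probeWidevineLevel(): Promise<"hardware" | "software" | "none"> {
  if (await canPlayWithRobustness("HW_SECURE_ALL")) return "hardware"; // likely L1
  if (await canPlayWithRobustness("SW_SECURE_CRYPTO")) return "software"; // likely L3
  return "none";
}
```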

Captions and subtitle rendering

Subtitle support sounds simple until you try to get it working consistently across a dozen device families. The issues:

  • Format support varies. WebVTT is broadly supported. TTML/DFXP works on most platforms, but rendering differs. Embedded CEA-608/708 captions have inconsistent support.
  • Styling differences. Caption styling (font size, background color, positioning) renders differently across platforms. Accessibility settings on the device itself may override your styling, which is actually correct behavior, but it means you cannot guarantee pixel-perfect caption rendering.
  • Timing accuracy. On some devices, captions display slightly ahead of or behind the audio. This is usually a player or MSE buffer timing issue, not a caption format problem.
  • Multi-language support. Switching subtitle languages during playback triggers different behavior on different platforms. Some handle it smoothly. Others reset the playback buffer.

The safest approach is to use WebVTT for text captions and test on every target device. Budget time for caption QA; it takes longer than you think.
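For reference, side-loading a WebVTT track looks roughly like this. The track URL and language are placeholders, and the duplicated mode assignment reflects engines that only honor “showing” once the cues have loaded.

```typescript
// Side-load a WebVTT caption track and switch it on. The track URL and
// language are placeholders. The mode is set both immediately and on "load"
// because some TV engines only honor "showing" after the cues have loaded.
function addCaptionTrack(video: HTMLVideoElement, url: string, lang: string): HTMLTrackElement {
  const trackEl = document.createElement("track");
  trackEl.kind = "subtitles";
  trackEl.src = url;        // e.g. "/captions/en.vtt" (placeholder)
  trackEl.srclang = lang;
  trackEl.label = lang;
  trackEl.default = true;
  video.appendChild(trackEl);

  trackEl.addEventListener("load", () => {
    trackEl.track.mode = "showing";
  });
  trackEl.track.mode = "showing";
  return trackEl;
}
```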

Memory pressure and long-session stability

Smart TVs have limited RAM, typically 1.5 to 3 GB total, with a significant portion used by the OS. Your app might have 500 MB to 1 GB available, depending on the platform and what else is running.

Memory issues manifest as:

  • Gradual slowdown during long viewing sessions as JavaScript heap grows
  • Image loading failures when too many thumbnails are loaded simultaneously
  • Player crashes during extended playback, especially with DRM license renewal
  • App termination by the OS when memory pressure gets too high

Mitigations:

  • Aggressively unload off-screen images and components
  • Use object pooling for frequently created/destroyed UI elements
  • Monitor JavaScript heap size in debug builds
  • Test 4-hour sustained playback sessions on your lowest-spec target device
  • Profile memory after navigating through your entire app and back; look for things that do not get collected
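The heap-monitoring suggestion above can be as simple as the sketch below, which assumes the non-standard, Chromium-only performance.memory API exposed by the web-based TV engines. The numbers are coarse, so watch the trend over a long session rather than the absolute values.

```typescript
// Debug-build heap logger using the non-standard, Chromium-only
// performance.memory API. Values are coarse; watch the trend across a long
// session rather than the absolute numbers.
interface ChromiumMemoryInfo {
  usedJSHeapSize: number;
  totalJSHeapSize: number;
  jsHeapSizeLimit: number;
}

function startHeapLogging(intervalMs: number = 60000): number | undefined {
  const memory = (performance as unknown as { memory?: ChromiumMemoryInfo }).memory;
  if (!memory) return undefined; // API not exposed on this device

  return window.setInterval(() => {
    const usedMb = (memory.usedJSHeapSize / (1024 * 1024)).toFixed(1);
    const limitMb = (memory.jsHeapSizeLimit / (1024 * 1024)).toFixed(1);
    console.log(`[heap] ${usedMb} MB used of ${limitMb} MB limit`);
  }, intervalMs);
}
```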

How to structure a build and QA matrix

You cannot test every app feature on every device. The matrix would be too large. Instead, prioritize:

Tier 1 devices (test everything on these): current-year models of your highest-traffic platforms. Typically the latest Samsung Tizen TV, the latest LG webOS TV, and one Google TV device. Run full regression tests on these.

Tier 2 devices (test critical paths): previous two model years of Tier 1 platforms, plus any device with known quirks. Test playback, DRM, navigation, and error recovery. Skip edge-case UI tests.

Tier 3 devices (smoke test only): older models and lower-traffic platforms. Verify that the app launches, content plays, and basic navigation works. Do not invest in deep testing.

What to test on every tier:

  1. Cold start to first playback
  2. DRM license acquisition and playback of encrypted content
  3. D-pad navigation through main user flows
  4. Subtitle display in at least one language
  5. Background/foreground transitions during playback
  6. Graceful error handling when network drops mid-stream
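To make item 6 concrete, “graceful” handling usually means something like the pattern below: note the stall, keep the UI responsive, and resume when connectivity returns. The retryPlayback hook is a placeholder for whatever your player exposes.

```typescript
// Minimal shape of "graceful" mid-stream network recovery: note the stall,
// keep the UI alive, and resume once connectivity returns. retryPlayback is
// a placeholder for whatever your player exposes (e.g. re-attaching the
// stream at the last good position).
function installNetworkRecovery(video: HTMLVideoElement, retryPlayback: () => void): void {
  let stalled = false;

  const onStall = (): void => {
    stalled = true;
    // Show a non-blocking "reconnecting" indicator instead of a hard error.
  };

  video.addEventListener("waiting", onStall);
  video.addEventListener("error", onStall);

  window.addEventListener("online", () => {
    if (stalled) {
      stalled = false;
      retryPlayback();
    }
  });
}
```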

What to automate where possible: smoke tests (app launches, API responses), screenshot comparison for UI regressions, playback start verification. Full playback QA still requires manual testing on real hardware because device-specific issues are too varied to predict.

Build your QA matrix as a living document. Update it every model year when new hardware ships. Track known device-specific workarounds in a searchable format so the team can reference them during development.
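One way to keep that document searchable is a machine-readable matrix checked into the repo. The shape below is hypothetical, with placeholder fields and a placeholder entry.

```typescript
// One possible machine-readable shape for the living QA matrix. Platform
// names, fields, and the sample entry are placeholders.
type Tier = 1 | 2 | 3;

interface DeviceEntry {
  platform: "tizen" | "webos" | "androidtv" | "firetv" | "roku" | "other";
  model: string;               // marketing name plus model year
  tier: Tier;
  chromiumVersion?: number;    // web-based platforms only
  knownQuirks: string[];       // free-text, searchable workaround notes
  lastFullRegression?: string; // ISO date of the last full pass
}

const qaMatrix: DeviceEntry[] = [
  {
    platform: "tizen",
    model: "Example 2024 flagship",
    tier: 1,
    chromiumVersion: 108,
    knownQuirks: [],
    lastFullRegression: "2024-06-01",
  },
];
```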

Platform-specific details

See our individual platform pages for device-specific notes and references.
