
In Disney Magic Match 3D, players are invited to restore order to the enchanted Book of Magic, whose spells have gone awry, spilling iconic Disney and Pixar items across its pages. Players match enchanted objects, organize the whimsical clutter, and rediscover the joy of tidying up – one cozy level at a time.
Throughout development, each Jam City team set ambitious goals: The art team pushed for high visual fidelity within strict poly budgets; the product team optimized for ideal session-time targets; designers ensured a steady flow of fresh content; and QA testers required powerful tools. Meanwhile, Jam City engineers prioritized world-class performance, stability, and responsiveness across all platforms. Here’s how they navigated complex challenges to bring their vision to life.
How does a studio maintain performance while managing challenges such as overheating, content delivery, and other technical issues?
During production, rendering hundreds of 3D objects with physics in a single level meant being careful with lighting, post-processing, and poly counts.
“Overheating was a major issue early on, especially with 20-minute sessions – phones would get uncomfortably hot,” says Hebby Mathew, a software engineering manager at Jam City and tech lead on Disney Magic Match 3D. “We addressed this by improving performance and developing a way to consistently measure device heat across various hardware.”
Another challenge was content delivery – streaming assets asynchronously without affecting the frame rate. “Unity’s extensible Editor helped us build efficient tools to manage when and how content was loaded,” says Mathew.
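Asynchronous content loading of this kind is commonly built on Unity’s Addressables system, which the team used elsewhere in the project. Here is a minimal sketch of the pattern – the asset address and component name are illustrative, not Jam City’s actual setup:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.AddressableAssets;
using UnityEngine.ResourceManagement.AsyncOperations;

public class PieceStreamer : MonoBehaviour
{
    // Hypothetical Addressables key; the real keys and grouping are not public.
    [SerializeField] private string pieceAddress = "Pieces/TeacupSet";

    private IEnumerator Start()
    {
        // Kick off the load without blocking the main thread; the handle
        // completes over several frames instead of stalling a single one.
        AsyncOperationHandle<GameObject> handle =
            Addressables.LoadAssetAsync<GameObject>(pieceAddress);
        yield return handle;

        if (handle.Status == AsyncOperationStatus.Succeeded)
            Instantiate(handle.Result);
    }
}
```

Because the handle is yieldable, the load can be scheduled during a quiet moment – a loading screen or level transition – so the frame rate never pays for it mid-play.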
Finally, while profiling the game across low-, mid-, and high-end devices to identify performance bottlenecks, the team identified key issues like excessive collision calls and high-poly models, especially for lower-end GPUs.
“Optimizing those brought both performance gains and helped resolve the overheating problems,” says Mathew.
The optimization work paid off. The team:
- Increased the frame rate by 40 fps on low-end and 35 fps on mid-range Android devices
- Reduced the number of draw calls by 40% on iOS and 45% on Android devices
- Dropped RAM usage by 30% on iOS and 15% on Android devices
- Decreased the non-network processing load time by 55%
- Shrunk the number of meshes with over 10,000 triangles from 33 to 0
- Cut the number of meshes with 5,000 to 10,000 triangles from 324 to 235
- Lowered the light count by 67%

When the team encountered performance issues, they turned to profiling tools for deeper insight.
“Unity Profiler markers were especially helpful for tracking multi-frame tasks,” says Kevin Johnson, software engineer at Jam City. “I also used the Profile Analyzer for comparisons.”
Andrea Ramirez, another engineer, extended the analysis using binary log files to capture specific frame snapshots. “That let us analyze performance without needing the profiler live,” she explains.

The team customized the Profiler further, adding disposables to streamline triggering and logging. Beyond CPU profiling, they monitored memory with the Memory Profiler and resolved rendering issues using the Frame Debugger. Profiler Recorder allowed them to track global performance stats during repeatable tests.
“One of the engine’s biggest strengths is how extensible the Editor is,” Ramirez adds. “By capturing multi-frame snapshots and scripting tests, we gathered consistent, reliable performance data.”
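Unity’s ProfilerMarker API supports the disposable pattern the team describes: Auto() returns a struct that ends the sample when its using block exits, so a region of code can be instrumented in one line. A minimal sketch (the marker and method names are illustrative):

```csharp
using Unity.Profiling;
using UnityEngine;

public class BoardLoader : MonoBehaviour
{
    // A custom marker appears under its name in the Profiler's CPU view.
    private static readonly ProfilerMarker s_SpawnMarker =
        new ProfilerMarker("Board.SpawnPieces");

    private void SpawnPieces(GameObject prefab, int count)
    {
        // Auto() begins the sample and ends it when the scope is disposed,
        // so Begin/End calls can never get mismatched.
        using (s_SpawnMarker.Auto())
        {
            for (int i = 0; i < count; i++)
                Instantiate(prefab);
        }
    }
}
```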

Profiling revealed major performance spikes at level start, primarily caused by shader compilation when assets appeared for the first time. Using Addressables, the team identified duplicate assets triggering unnecessary shader compiles.
“We preloaded UI elements and game pieces earlier,” says developer John Enney. “Improperly grouped asset bundles were a key issue, which caused repeated shader compilation – something I initially overlooked.”
Early on, they integrated Quick Outline to add piece highlighting quickly. While it introduced some overdraw – extra rendering overhead from pixels being drawn multiple times – optimizations allowed them to keep it without a full rewrite.
Audio also contributed to performance spikes. “Each game piece triggered its own collision sound,” Enney explains. “That worked fine initially, but with 300 pieces dropping simultaneously, frame times spiked to 500ms. Replacing those multiple sounds with a single ambient drop sound reduced that to 20-30ms – and it actually sounded better.”
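The single-sound approach can be sketched as a small aggregator that rate-limits playback: pieces report their collisions, and at most one drop sound plays per short window. The component and interval below are illustrative, not the game’s actual implementation:

```csharp
using UnityEngine;

public class DropSoundAggregator : MonoBehaviour
{
    [SerializeField] private AudioSource ambientDropSource; // assumed wired in the Inspector
    [SerializeField] private float minInterval = 0.25f;     // tuning value, not from the article

    private float _lastPlayTime = float.NegativeInfinity;

    // Pieces call this from OnCollisionEnter instead of playing their own clip.
    public void ReportCollision()
    {
        // Collapse a burst of collisions into a single playback.
        if (Time.time - _lastPlayTime < minInterval)
            return;

        _lastPlayTime = Time.time;
        ambientDropSource.Play();
    }
}
```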

Haptic feedback required fine-tuning as well. “We limited haptic calls during mass collisions,” says tech director Troy Lee. “Applying this restraint improved performance consistently throughout the game.”
Lighting presented another challenge. Although deferred rendering improved performance, it conflicted with artistic goals. “Forward rendering struggles with multiple lights because it requires expensive passes per object,” explains Ramirez. To balance quality and efficiency, Enney developed a baked lighting solution that preserved visual fidelity while enhancing runtime performance.
Geometry complexity was also a concern. “Our initial models exceeded 10,000 triangles,” says lead technical artist Kristen Weeks. “We reduced most models to under 2,500 triangles to ensure good performance on lower-end devices.”

Early on, asset loading and system initialization involving web calls caused major issues. To avoid slowdowns during gameplay or UI interactions, the team initially loaded all assets upfront. However, as the game’s features and assets expanded, startup times became excessively long.
They optimized by moving some assets to load asynchronously in the background, reducing initial load times while maintaining a smooth in-game experience. They also parallelized loading tasks using .NET’s task system, and these improvements alongside native asset bundle loading brought significant gains.
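Parallelizing independent startup work with .NET’s task system typically comes down to Task.WhenAll. A simplified sketch – the subsystem names and placeholder delays are hypothetical, not Jam City’s actual loaders:

```csharp
using System.Threading.Tasks;

public static class StartupLoader
{
    public static async Task InitializeAsync()
    {
        // Independent startup tasks run concurrently; the method resumes
        // only once every one of them has completed.
        await Task.WhenAll(
            LoadPlayerProfileAsync(),
            LoadRemoteConfigAsync(),
            LoadLocalizationAsync());
    }

    // Placeholder work standing in for real I/O-bound loaders.
    private static Task LoadPlayerProfileAsync() => Task.Delay(100);
    private static Task LoadRemoteConfigAsync()  => Task.Delay(100);
    private static Task LoadLocalizationAsync()  => Task.Delay(100);
}
```

The win comes from overlapping waits: three loaders that each spend most of their time blocked on the network finish in roughly the time of the slowest one, not the sum of all three.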
Network latency, especially during the initial load, also impacted performance. To tackle this, the team switched from Unity coroutines to C# async/await for network calls, reducing latency. “We also optimized JSON deserialization by replacing Newtonsoft’s reflection-based approach with code generation, improving performance,” says Johnson.
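A coroutine-free network call built on async/await might look like the sketch below. The HttpClient usage is illustrative of the pattern, not the team’s actual networking stack:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class ConfigClient
{
    private static readonly HttpClient s_Http = new HttpClient();

    // Awaitable fetch: the caller's continuation runs when the response
    // arrives, instead of a coroutine polling a flag every frame.
    public static async Task<string> FetchConfigAsync(string url)
    {
        using HttpResponseMessage response = await s_Http.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```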
Instantiation costs further affected initial load times, with a noticeable delay – about half a second on mid-range devices – due to the game creating many UI and background prefabs immediately after launch.

“A big part of the cost was heavy Awake() initialization on many objects,” Johnson explains. “We optimized instantiation by disabling objects upon creation and enabling them only when needed. We also deferred or staggered instantiation rather than doing it all at once, which reduced visible latency.”
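Both tactics – spawning disabled and staggering work across frames – can be combined in a simple coroutine-driven spawner. A hedged sketch, assuming the prefab asset is saved inactive so its Awake() runs only when each object is later enabled:

```csharp
using System.Collections;
using UnityEngine;

public class StaggeredSpawner : MonoBehaviour
{
    [SerializeField] private GameObject inactivePrefab; // assumed saved inactive in the project
    [SerializeField] private int total = 300;
    [SerializeField] private int perFrame = 10; // tuning value, not from the article

    private IEnumerator Start()
    {
        for (int i = 0; i < total; i++)
        {
            // Cloning an inactive prefab defers Awake()/OnEnable() until
            // the object is activated when it's actually needed.
            Instantiate(inactivePrefab);

            if ((i + 1) % perFrame == 0)
                yield return null; // spread the remaining work across frames
        }
    }
}
```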
To measure performance more precisely, the team used System.Diagnostics.Stopwatch to time instantiation intervals and instrumented various parts of the code.
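A minimal version of that instrumentation wraps a batch of instantiations in a Stopwatch; the helper and log format here are illustrative:

```csharp
using System.Diagnostics;
using UnityEngine;

public static class SpawnTimer
{
    // Stopwatch gives sub-millisecond resolution, which frame-based
    // timers can't reliably provide for work inside a single frame.
    public static GameObject[] InstantiateTimed(GameObject prefab, int count)
    {
        var results = new GameObject[count];
        Stopwatch watch = Stopwatch.StartNew();

        for (int i = 0; i < count; i++)
            results[i] = Object.Instantiate(prefab);

        watch.Stop();
        UnityEngine.Debug.Log(
            $"Instantiated {count} objects in {watch.Elapsed.TotalMilliseconds:F2} ms");
        return results;
    }
}
```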
Despite these optimizations, real-world network quality still impacts load times. To address this, the team is experimenting with lazy loading – shipping the game with all configs and allowing players to enter immediately while updating configs asynchronously in the background. “Early tests look promising, but we’re cautiously monitoring live results,” says Mathew.

During performance optimization, the team noticed that some lower-end devices showed minimal improvement. They discovered that these devices predominantly used Mali GPUs, which struggled to handle the game’s target item counts effectively. Even so, further targeted optimization allowed them to maintain the 60 fps target across all device tiers.
Weeks elaborates on these limitations: “Older third- and fourth-generation Mali GPUs are strict tile-based renderers that easily become geometry-bound. In contrast, many other GPUs use hybrid rendering techniques that help avoid this bottleneck. Newer Mali GPUs, like those found in recent Pixel phones, have more processing units and handle geometry much better.”

To better understand these constraints, Mathew built a tool early in development that instantiated the increasing numbers of objects on various devices to track frame rate drop-offs. The tool revealed that certain GPUs had lower thresholds, underscoring the need to optimize game models carefully without sacrificing art quality.
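A stress-test harness in that spirit can be surprisingly small: spawn objects in steps and log the observed frame rate at each step. This sketch is a reconstruction of the idea, not the actual tool – step sizes and positions are illustrative:

```csharp
using System.Collections;
using UnityEngine;

public class SpawnStressTest : MonoBehaviour
{
    [SerializeField] private GameObject prefab;
    [SerializeField] private int step = 25;         // objects added per step
    [SerializeField] private int maxObjects = 500;  // illustrative ceiling
    [SerializeField] private float settleTime = 2f; // seconds to average over

    private IEnumerator Start()
    {
        for (int count = step; count <= maxObjects; count += step)
        {
            for (int i = 0; i < step; i++)
                Instantiate(prefab, Random.insideUnitSphere * 3f, Random.rotation);

            // Let the frame rate settle, then sample it over a fixed window.
            int startFrame = Time.frameCount;
            yield return new WaitForSeconds(settleTime);
            float fps = (Time.frameCount - startFrame) / settleTime;
            Debug.Log($"{count} objects: {fps:F1} fps");
        }
    }
}
```

Running the same scene on each device tier turns the log into a per-GPU drop-off curve, which is exactly the threshold data the team used to set poly and item-count budgets.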
“In preproduction, the art director requested heavy use of post-processing effects and multiple dynamic lights,” says Weeks. “However, we removed post-processing for production due to its significant contribution to device overheating and performance hits.”
Enney also boosted efficiency by baking a point light directly into shaders, reducing the number of active lights from three or four to just one.
This collaborative approach allowed the team to strike an optimal balance between visual quality and performance, ensuring a smooth experience across a wide range of devices.