Your Jira Upgrade Will Break 3 Things. Here’s How to Find Them First
A practical pre-upgrade checklist for Jira Data Center administrators who’ve been burned before (or don’t want to be).
Jira 11 is here. And with it comes Java 21, Spring 6, Jakarta EE 10, the death of Trusted Apps authentication, and a jQuery major version bump that will break every custom UI fragment you forgot you had.
If you’re on Jira 10 thinking “we’ll upgrade when we’re ready” — you have until the LTS 11.3 support window runs out (December 2027) to make that decision. That sounds like plenty of time. But every admin who’s done a major Jira upgrade knows: the upgrade itself takes a day, but finding out what broke takes a week.
Real stories from the field
One community member shared that their 396,000-issue instance was offline for 57 hours after a major version upgrade — because a known reindex bug combined with default settings nobody checked.
Another organization tried a Zero Downtime Upgrade across major versions, only to discover it’s not supported — their cluster went down anyway, and came back in an inconsistent state.
I’ve been building tools for Jira DC admins for years, and every major upgrade cycle the pattern repeats. Plugins silently break. Performance degrades. The load balancer locks everyone out — including the admin trying to fix it.
So here’s what actually breaks during Jira DC upgrades, and how to find these problems before they find you.
The 3 Things That Break
Every upgrade failure I’ve seen falls into one of three categories:
- Your apps — plugins that worked yesterday won’t work tomorrow
- Your data — orphaned records, corrupted workflows, schema surprises
- Your infrastructure — reindex timeouts, load balancer lockouts, NFS stale mounts
Let’s go through each one with specific checks you can run today.
1 Your Apps Will Break (and Some Won’t Tell You)
Apps are the #1 cause of upgrade failures. Not sometimes — consistently.
The silent killer pattern
An incompatible plugin in the installed-plugins folder can prevent Jira from starting at all. No error page. No admin console. Just… nothing. The only fix is emptying the plugins folder and reinstalling apps after Jira starts.
But that’s the obvious case. The worse scenario is when apps appear to work but silently malfunction. After one community member upgraded, the bundled Automation for Jira plugin started consuming extreme CPU. The upgrade “succeeded,” but the system was essentially DoS-ing itself.
What to check
Use the built-in compatibility checker:
Go to Administration > Manage Apps > scroll to Jira Update Check at the bottom. Select your target version and click “Check.”
- Apps showing “Unknown” aren’t necessarily incompatible — it often means the vendor hasn’t updated their Marketplace listing
- “Automation for Jira” will show as “Incompatible” when checking Jira 10.x — this is a false positive
- Apps that changed from third-party to bundled (Insight, Advanced Roadmaps) often show incorrectly
ScriptRunner deserves special attention
If you use ScriptRunner, know that Groovy 4 introduced breaking changes that will silently break your scripts:
| Old code | Must become |
|---|---|
| `import groovy.util.XmlSlurper` | `import groovy.xml.XmlSlurper` |
| `import groovy.util.slurpersupport` | `import groovy.xml.slurpersupport` |
Also check for these deprecated Jira API patterns (not Groovy 4 specific, but commonly found in legacy scripts):
| Old code | Must become |
|---|---|
| `ComponentManager.getInstance()` | `ComponentAccessor` |
| `searchResults.getIssues()` | `searchResults.getResults()` |
Empty items in list literals (`"a","b",,"c"`) are no longer allowed either. You need explicit empty strings (`"a","b","","c"`).
Export ALL ScriptRunner scripts before upgrading. Then search them for these patterns. Every. Single. One.
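Assuming your exported scripts sit in one directory, a quick sweep for the patterns above might look like this sketch (the directory path is a placeholder):

```shell
# Sweep exported ScriptRunner scripts for Groovy 4 / legacy Jira API patterns.
# SCRIPTS_DIR is an assumption -- point it at your export location.
SCRIPTS_DIR="${SCRIPTS_DIR:-./scriptrunner-export}"

# Groovy 4 moved XML classes from groovy.util to groovy.xml
grep -rnE 'groovy\.util\.(XmlSlurper|XmlParser|slurpersupport)' "$SCRIPTS_DIR" 2>/dev/null || true
# Deprecated Jira API entry points
grep -rn 'ComponentManager\.getInstance()' "$SCRIPTS_DIR" 2>/dev/null || true
grep -rn '\.getIssues()' "$SCRIPTS_DIR" 2>/dev/null || true
# Zero hits across all three searches is what you want
```

Any hit is a script to fix before the upgrade, not after.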
The upgrade strategy that works
- Before upgrading, disable all third-party plugins
- Upgrade Jira first
- Re-enable and upgrade plugins one at a time
- After each re-enable, check atlassian-jira.log for errors
- Run a full reindex only after all plugins are updated
This is slower than upgrading everything at once. That’s the point — you isolate which app caused which problem.
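The per-plugin log check can be a one-liner; this sketch assumes a default Data Center home path (adjust `JIRA_HOME` for your environment):

```shell
# After re-enabling each plugin, scan recent log output for plugin errors.
# JIRA_HOME is an assumption -- set it to your actual home directory.
JIRA_HOME="${JIRA_HOME:-/var/atlassian/application-data/jira}"
LOG="$JIRA_HOME/log/atlassian-jira.log"

# Count ERROR/FATAL lines mentioning the plugin system in the last 500 lines
tail -n 500 "$LOG" 2>/dev/null | grep -E 'ERROR|FATAL' | grep -icE 'plugin|osgi' || true
```

A non-zero count after enabling a specific app tells you exactly which app to investigate.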
What Jira 11 changes for apps
This isn’t a minor version bump. Jira 11 introduces platform-level breaking changes that affect most third-party apps:
Spring 6 and Jakarta EE 10 migration
This is the big one. javax.* packages become jakarta.*. Every app that touches servlets, filters, or dependency injection needs a rewrite — not a tweak, a rewrite.
Some vendors have reported spending months trying to comply with Atlassian’s new requirements due to the scope of changes.
- Trusted Apps authentication removed entirely. If your integrations use Trusted Apps (common in older enterprise setups), they will stop working. Migrate to OAuth 2.0 with impersonation before upgrading.
- jQuery upgrade from 2 to 3. Every custom UI fragment, ScriptRunner behavior, and plugin that touches the frontend may break. jQuery Migrate removal is planned for Jira 12, so fixing jQuery 3 compatibility now avoids a second round of breakage.
- LESS web-resource transformer removed. Plugins using LESS runtime transformation will fail in Jira 11. The complete block on all plugins requiring transformation is planned for Jira 12.
- Global serialization filter. New blocklist for Java deserialization, Velocity, Struts, and XStream. Apps that relied on these serialization paths will fail silently.
- Apps must support Platform 6, 7, AND 8 simultaneously through 2026. This means some vendors may deprioritize Jira 11 support if their user base is still mostly on 10.x. Check with your vendors explicitly.
2 Your Data Has Problems You Don’t Know About
Every Jira instance accumulates data integrity issues over time. They’re harmless during normal operations but can cause upgrade failures when the database schema changes.
The queries that save upgrades
Run these before you upgrade. Each one takes seconds but can save hours of troubleshooting.
Check for broken workflow entries
Issues stuck in invalid states will cause problems during schema migration:
```sql
SELECT jiraissue.id, jiraissue.pkey, os_wfentry.*
FROM jiraissue
JOIN os_wfentry ON jiraissue.workflow_id = os_wfentry.id
WHERE os_wfentry.state IS NULL
   OR os_wfentry.state = 0;
```

If this returns results, fix them before upgrading:

```sql
UPDATE os_wfentry
SET state = 1
WHERE id IN (
    SELECT os_wfentry.id
    FROM jiraissue
    JOIN os_wfentry ON jiraissue.workflow_id = os_wfentry.id
    WHERE os_wfentry.state IS NULL
       OR os_wfentry.state = 0
);
```
Check for orphaned custom field configurations
These won’t cause a visible error, but they slow down the upgrade process and can cause 500 errors post-upgrade:
```sql
SELECT *
FROM fieldconfigscheme fcs
WHERE NOT EXISTS (
    SELECT 1 FROM configurationcontext cc
    WHERE cc.fieldconfigscheme = fcs.id
)
AND fcs.fieldid LIKE 'customfield_%';
```
Check for orphaned issue links
Links pointing to deleted issues:
```sql
SELECT il.*
FROM issuelink il
LEFT JOIN jiraissue src ON il.source = src.id
LEFT JOIN jiraissue dst ON il.destination = dst.id
WHERE src.id IS NULL
   OR dst.id IS NULL;
```
Check for projects without permission schemes
Can cause access issues after upgrade:
```sql
SELECT *
FROM project
WHERE id NOT IN (
    SELECT source_node_id
    FROM nodeassociation
    WHERE sink_node_entity = 'PermissionScheme'
);
```
Check for issues with NULL status
These will definitely break during upgrade:
```sql
SELECT jiraissue.id, jiraissue.issuenum,
       jiraissue.issuestatus, jiraissue.project
FROM jiraissue
JOIN os_currentstep currentStep
  ON jiraissue.workflow_id = currentStep.entry_id
WHERE jiraissue.issuestatus IS NULL;
```
Know your numbers
Before upgrading, know exactly what you’re dealing with:
```sql
-- Total issues (critical for reindex time estimation)
SELECT count(*) FROM jiraissue;

-- Attachment volume per project (identifies bloat)
SELECT count(*),
       SUM(fa.filesize) / 1000000 AS total_mb,
       p.pname, p.pkey
FROM project p
JOIN jiraissue j ON p.id = j.project
JOIN fileattachment fa ON fa.issueid = j.id
GROUP BY p.pname, p.pkey
ORDER BY total_mb DESC;

-- Custom field values per project (heavy projects slow reindex most)
SELECT count(*), p.pname, p.pkey
FROM jiraissue i
JOIN project p ON i.project = p.id
JOIN customfieldvalue cfv ON i.id = cfv.issue
GROUP BY p.pname, p.pkey
ORDER BY count(*) DESC;
```
3 Your Infrastructure Will Surprise You
Even if your apps are compatible and your data is clean, the infrastructure layer has its own set of traps.
The reindex reality
Major version upgrades typically require a full reindex. Starting with Jira 11.2, OpenSearch is fully production-supported as a search backend (Lucene public APIs are being deprecated, though Lucene itself remains supported for now), which means the search infrastructure is changing fundamentally.
Reindex time benchmarks
- Default thread count: 20 (can be increased significantly)
- Increasing to 50 threads reduced one admin’s reindex from 50 minutes to 10 minutes — a 5x improvement
- Reindex is very I/O intensive — SSD/NVMe storage makes a dramatic difference
- Antivirus software scanning index files can slow reindex dramatically (exclude $JIRA_HOME from AV scanning)
Before upgrading, check this setting: Administration > System > Advanced Settings > look for the reindex thread count. If your hardware can handle it, increase the number of threads. Test in staging first.
Reindex stuck at 0%?
In DC clusters, this is almost always a stale NFS mount. The mount worked fine before the upgrade — the reboot during upgrade is what triggers it. The only fix is a server reboot. Check your NFS mount options. Recommended:
```
rw,nfsvers=4.1,lookupcache=pos,noatime,intr,rsize=32768,wsize=32768,_netdev
```
The load balancer lockout
Since Jira 8.19.1, when indexes are unhealthy, Jira responds “MAINTENANCE” to the /status endpoint. If your load balancer uses /status for health checks (it probably does), it will block all traffic — including your admin access.
You literally cannot log in to fix the problem that’s preventing you from logging in.
Configure a bypass connector in Tomcat’s server.xml that lets you access the admin console directly, bypassing the load balancer. Do this before you need it.
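A sketch of such a bypass connector, assuming port 8081 is free and firewalled off from general users (the port number and thread settings are placeholders, not Atlassian-recommended values):

```xml
<!-- Additional connector in <jira-install>/conf/server.xml, alongside the
     main connector. Reach Jira directly at http://<node>:8081 when the
     load balancer is returning 503s. Restrict this port at the firewall. -->
<Connector port="8081" protocol="HTTP/1.1"
           maxThreads="25" minSpareThreads="2"
           connectionTimeout="20000"
           enableLookups="false"
           acceptCount="10"/>
```

Jira needs a restart to pick up the new connector, which is exactly why you add it during a planned window, not during an outage.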
The Zero Downtime Upgrade trap
ZDU is great for minor version updates (10.1 to 10.3, for example). But it does not work between major versions (10.x to 11.x). If you try it, the upgraded node refuses to start with: “Jira does not support zero downtime upgrades between major releases.”
Your entire cluster has to go down for the 10.x to 11.x upgrade. No shortcuts.
Post-upgrade performance: the invisible failure
Some upgrades “succeed” but performance degrades silently:
- entity_property table: After any upgrade, database index statistics may be outdated, causing the query optimizer to choose terrible execution plans. Run UPDATE STATISTICS (SQL Server) or VACUUM ANALYZE (PostgreSQL) on this table immediately post-upgrade.
- Automation for Jira CPU spike: The bundled plugin can consume extreme CPU after upgrade. Disabling it returns CPU to normal. Re-enable after verifying the rest of the system is stable.
- Basic Auth disabled by default in Jira 11. Your monitoring scripts, API integrations, and CI/CD pipelines that use basic authentication will silently stop working. They won’t fail loudly — they’ll just get 401 responses that may or may not be logged.
None of these show up in upgrade logs. You only notice when users start complaining — or when your Monday morning dashboard is empty.
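One way to catch the Basic Auth failure mode early is a post-upgrade auth smoke test using a personal access token (Jira DC sends PATs via the Bearer scheme). The base URL and token here are placeholders:

```shell
# Post-upgrade API auth smoke test using a personal access token.
# JIRA_URL and JIRA_PAT are placeholders -- set them for your instance.
JIRA_URL="${JIRA_URL:-https://jira.example.com}"
JIRA_PAT="${JIRA_PAT:-changeme}"

# PATs use "Authorization: Bearer <token>" -- not user:password Basic Auth
STATUS=$(curl -s --max-time 10 -o /dev/null -w '%{http_code}' \
  -H "Authorization: Bearer $JIRA_PAT" \
  "$JIRA_URL/rest/api/2/myself")
echo "GET /rest/api/2/myself -> HTTP $STATUS (expect 200)"
```

Run the same check against every integration account you have, before users find the 401s for you.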
The Pre-Upgrade Checklist
Here’s the condensed version. Print this. Pin it. Follow it every time.
Weeks before
- Read ALL release notes between your current and target version (budget 1-2 hours)
- Check the upgrade matrix for supported paths
- Verify Java 21, database, and OS compatibility
- Run app compatibility check (Manage Apps > Jira Update Check)
- Contact vendors for any apps showing “Unknown”
- Export all ScriptRunner scripts and search for breaking patterns
- Review your last upgrade retrospective
Days before (in staging)
- Set up a full production replica — not a subset
- Run all SQL integrity checks from Section 2
- Run built-in Integrity Checker (Administration > System > Integrity Checker)
- Run Health Checks (Administration > Troubleshooting and support tools)
- Clean up orphaned data
- Clean up automation audit log if it has grown large
- Perform the upgrade in staging — measure exact time for each phase
- Run the upgrade a second time (reset staging, apply fixes, validate clean process)
Day of (production)
- Announce maintenance window (budget 1.5-2x your measured staging time)
- Disable email notifications
- Take database backup with native database tools (not Jira XML backup)
- Take file system backup or VM snapshot
- Back up server.xml, setenv.sh, and customized configs (use diff, don’t copy old over new)
- Stop Jira on all nodes
- Start upgrade on one node only
- Monitor: `tail -f <jira_home>/log/atlassian-jira.log`
- After startup, re-enable and upgrade plugins one by one
- Run full reindex (with increased thread count if tested)
- Verify: create issue, transition workflow, run a JQL search, check a dashboard
- Re-enable email notifications
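For the database backup step, here is a dry-run sketch for a PostgreSQL-backed instance. The database name and paths are assumptions, and it prints the command instead of running it — swap in your own database's native tools:

```shell
# Dry-run sketch of a native pre-upgrade backup (PostgreSQL shown).
# DB_NAME and BACKUP_DIR are placeholders; set RUN= (empty) to execute.
DB_NAME="${DB_NAME:-jiradb}"
BACKUP_DIR="${BACKUP_DIR:-./backups}"
STAMP=$(date +%Y%m%d-%H%M)
RUN="${RUN:-echo}"   # 'echo' = dry run; empty = actually execute

mkdir -p "$BACKUP_DIR"
# Custom-format dump (-Fc) allows selective, parallel restore via pg_restore
$RUN pg_dump -Fc "$DB_NAME" -f "$BACKUP_DIR/jiradb-pre-upgrade-$STAMP.dump"
```

Whatever tool you use, verify the backup is restorable in staging — an untested backup is a hope, not a rollback plan.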
If it fails
Don’t troubleshoot on a broken production system. Roll back immediately, then debug in staging. Rolling back means:
- Stop Jira
- Restore database from backup (native tools)
- Restore installation and home directories
- Start Jira on the old version
What’s New in Jira 11: The Full Breakage List
Here’s everything that changes when you upgrade from Jira 10 to 11. This isn’t meant to scare you — it’s meant to give you a checklist of what to verify.
Platform changes
| Change | Impact | Action required |
|---|---|---|
| Java 21 required | Java 17 no longer supported | Install JDK 21 on all nodes |
| Spring 6 / Jakarta EE 10 | `javax.*` becomes `jakarta.*` | Verify all apps support new namespace |
| jQuery 2 to 3 | Breaking changes in selectors, events, AJAX | Test all custom UI code and plugin frontends |
| Tomcat APR/Native removed | APR protocol implementations gone | Switch to NIO connector if using APR |
Authentication and security
| Change | Impact | Action required |
|---|---|---|
| Basic Auth disabled by default | API calls with user:password stop working | Migrate to personal access tokens or OAuth 2.0 |
| Trusted Apps removed | Inter-app authentication broken | Migrate to OAuth 2.0 with impersonation |
| Global serialization filter | Blocklist for Java deserialization | Test integrations that pass serialized objects |
Search infrastructure
| Change | Impact | Action required |
|---|---|---|
| OpenSearch production-ready (from 11.2) | New production-supported search backend | Plan migration to OpenSearch (recommended) |
| Lucene public APIs deprecated | Direct Lucene API calls will break | Migrate custom search code to new Search API |
Plugin and UI changes
| Change | Impact | Action required |
|---|---|---|
| LESS transformer removed | Plugins using LESS runtime transformation fail | Compile LESS to CSS at build-time (full block in Jira 12) |
| AUI Dropdown 1 / Toolbar 1 removed | Custom admin panels using old components break | Update to Dropdown 2 / Toolbar 2 |
| JSP runtime compilation disabled | Custom JSP pages silently stop rendering | Remove or migrate custom JSPs |
Lessons From Admins Who’ve Done This Before
These tips come from Atlassian Community veterans and Jira experts like Rodney Nissen (The Jira Guy) and Rachel Wright (Strategy for Jira). They’re not in official docs, but they should be.
1. Never copy old config files over new ones
Don’t take your Jira 10 server.xml and drop it into the Jira 11 directory. Atlassian makes structural changes to these files between versions. Instead, use diff to compare your customized files against the new defaults, then manually apply your changes to the new files.
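A sketch of the diff approach (the install paths are placeholders):

```shell
# Compare the new default server.xml against your customized old one,
# then re-apply only your intentional changes to the new file by hand.
# Paths are placeholders -- adjust to your actual install directories.
OLD_INSTALL="${OLD_INSTALL:-./atlassian-jira-10}"
NEW_INSTALL="${NEW_INSTALL:-./atlassian-jira-11}"

diff -u "$NEW_INSTALL/conf/server.xml" "$OLD_INSTALL/conf/server.xml" || true
```

Repeat for `setenv.sh` and any other file you've customized; the diff output is your to-do list, not a patch to apply blindly.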
2. Run the upgrade twice in staging
First run: upgrade, find what breaks, document fixes. Second run: reset staging, apply all fixes beforehand, run a clean upgrade. If the second run succeeds without intervention, you’re ready for production.
3. Budget 1-2 weeks for end-user testing
Admins can verify technical success, but only users find the workflow that silently lost a transition or the dashboard that stopped loading.
4. Target upgrades every 6 months
Don’t exceed 1 year between upgrades. Longer gaps mean more accumulated breaking changes and exponentially harder upgrades. The jump from 9.4 to 11.3 is a different beast than 10.3 to 11.3.
5. For production failures: roll back first, debug second
Do not troubleshoot a broken production system with 500 users waiting. Roll back to the working version immediately, then reproduce the failure in staging where you have time to think.
6. Write upgrade retrospectives
After every upgrade, document what broke, what took longer than expected, and what you’d check next time. Your future self will thank your past self.
The Real Lesson
The admin whose reindex took 57 hours didn’t lack skill. They lacked a 10-minute pre-flight check. The organization that got locked out of their own Jira didn’t have bad infrastructure. They had a load balancer that did exactly what it was configured to do.
Upgrade failures aren’t about competence. They’re about the gap between what you assume will work and what you actually verified.
Run the checks. Measure in staging. Have a rollback plan that someone other than you has reviewed.
Your upgrade will go fine. The 10 minutes you spend checking is cheaper than the 57 hours you’ll spend recovering.
Run These Checks Without SSH
Home Directory Browser lets you browse Jira logs, database tables, and configuration files through a web UI — no terminal access required.
- Run pre-upgrade SQL queries directly from your browser
- Browse log files for errors without SSH
- Check configuration files and compare against expected values
- Find problems before they find you