We've been using Linear for a couple months. It's kind of like Superhuman in that the primary benefit is hotkeys. Otherwise it is an issue tracker.
I think the reason we see a steady stream of new issue trackers is that teams are trying to fix with software what are people problems.
New issue trackers feel faster for the same reason switching browsers tends to feel faster—you're getting rid of all of the crap you piled up in the old one. Don't migrate your backlog, start with only a couple engineers in a new issue tracker, and suddenly, wow!, this new tool is so much better!
But I don't think it really solves the problem. For most organizations, tracking tickets is a solved (by many products) problem. Starting with a new tool has the appearance of making things better, but leads to the same place. The problem is not the tool, it's the structure of the organization.
Slightly OT thought, but related to ticket tracking systems and the idea of reduced backlogs:
As we move more and more towards ubiquitous "Product Orgs," separate from engineering, I think we're seeing backlogs just explode in size at most places. People need to realize that a large engineering backlog has a lot of negative effects on the SDLC (& velocity) as a whole. I wish more people would embrace heavy-handed WIP limits, even for backlog.
But, since backlog is now the primary output of "product orgs" at many (less-than-great) companies, you now have an entire org whose jobs depend on not learning that lesson.
I don't know what the answer is, but I really am starting to hate this entire "product org" concept in general. It feels like we used to be able to just engineer our way around many of these challenges ("here, look what I built last week to justify my case"), or just get people into a room and talking. But once the decision channels become explicitly siloed off into a separate org with C-Suite-level autonomy, I'm not sure how you effectively bridge the gaps that form.
Any engineers who feel like having a separate product org has been a benefit to your company's product quality and delivery speed want to comment on how you see this sort of thing working effectively?
Depending on your company culture, this may be difficult to do, but I've seen good luck with moving toward a model where the product team doesn't push things into developers' to-do lists. The product folks supply a strictly prioritized wish list, and developers pull from it. Always one thing at a time, always the very first thing on the list.
It's the only way I've seen to get the product people to shift their thinking from, "What's a giant list of all the things we wish we could build?" to, "What is the thing we actually need to build next?" And, without that change in thinking, the natural impulse is going to be to try and pack more features into the product using a close analogue to the technique that farmers use to produce foie gras.
You may have to be willing to sit back and let them drop the ball and get burned for it a few times. If they're pure business folks with little customer support experience, they might never personally understand how fixing a bug can be more valuable to the business than building a new feature, if you don't give them a chance to talk to a user who's irate about some piece of deferred maintenance.
How do you deal with the possible disconnect between product and engineering in terms of feasibility? If #1 on the list is something that will absorb just about all of engineering for years, but #2-5 can each be developed in weeks?
What kind of scope level do you think works when talking about that list? Like project level instead of tasks, and then engineering figures out how to deliver the prioritized project? Otherwise it kinda just sounds like a backlog to me, just curious how you see the difference there.
Also, I've seen product orgs that focus a lot of energy on not being burned, regardless of the scenario. They use their position at the crux of many communication channels to subtly shift blame around. Which is relatively easy to do when your role is complex and many-faceted.
It's going to invariably depend on how your company is organized, and its culture.
If you're at all able to start with something approaching the business-facing parts of Scrum, I think that's a great place to start. In Scrum, there are ideally two completely separate queues: The product backlog belongs to the business folks, and is organized in a way that works for them, and the dev team's list of tasks belongs to dev, and probably shouldn't look similar at all. And the only way for things to move from one queue to the other is during a regular meeting where the dev team says, "OK, it looks like we've got room to take on X additional work, what can you give us?"
And then, and I think this is secretly one of the most important things, the dev team physically creates new tickets for the new work they're taking on. You should never be able to just drag-and-drop tickets from one queue to the other. Ideally, it shouldn't even be physically possible to do so. Where I am right now, one is in $JIRA_COMPETITOR, and the other is an Excel spreadsheet. (If we all worked out of the same office, I'd probably go for an Excel spreadsheet and yellow sticky notes on a wall. Yellow sticky notes on a wall are awesome. They're a great deterrent to keep the PHBs from generating reports and dashboards to bother people with KPIs.)
This little ceremony, and the associated firewall between two related but separate business functions, is, I think, the heart and soul of what Scrum should be. It's the thing that sets up all the little firewalls and incentives that encourage business and dev to maintain a relationship that's more functional than dysfunctional. Everything else - sprints, story points, standup meetings, all that jazz - is just window dressing that some teams may or may not find makes that ceremony go more smoothly.
And if you can get a formal handoff like that in place, and get people to respect it, my (admittedly hopelessly idealistic) belief is that the rest of it gets easier to figure out. What's the correct scope and size of individual requests that the product people make of the dev team? Whatever size and scope best enables people to come out of the meeting feeling good about how it went.
But if everything's being jammed into one gimongous ticketing system, like all the ticketing system vendors seem to want you to do, you're doomed.
I prefer a single queue with simple rules of who can do what.
The approach I learned at Pivotal Labs was that product management decides the order of stories and bugs, since these add or subtract user value. Engineering can put chores anywhere in the backlog according to their considered discretion, since these reduce drag and risk. Each week you have the Iteration Planning Meeting to look at what's coming up. Each day engineering pulls whatever's on the top of the backlog and works on it until it's done.
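A toy sketch of those rules (my own model from memory, not anything the tracker itself enforces):

```typescript
// Single backlog with role-based placement rules: product orders stories and
// bugs (user value), engineering may slot chores anywhere (drag/risk), and
// engineering always pulls from the top.
type Role = "product" | "engineering";
type Kind = "story" | "bug" | "chore";

interface Item { id: string; kind: Kind; title: string; }

class Backlog {
  private items: Item[] = [];

  insert(role: Role, item: Item, position: number): void {
    const ownsPlacement =
      (role === "product" && item.kind !== "chore") ||
      (role === "engineering" && item.kind === "chore");
    if (!ownsPlacement) throw new Error(`${role} cannot place a ${item.kind}`);
    this.items.splice(position, 0, item);
  }

  // Each day engineering takes whatever is on top and works it to completion.
  pullNext(): Item | undefined {
    return this.items.shift();
  }
}
```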
If I am getting the drift right here, the root of the problem is that the Product Org is not prioritising well, nor putting an upper limit on the total story points pushed in a release?
That would lead to a large number of stories being classified as "Priority" and left to engineering to figure out how to deliver.
Separate Product Orgs, like it or hate it, are not going away anytime soon. If anything, I believe they will become more ubiquitous.
Probably what we need is not more and better ticketing systems, but better prioritisation systems, with well-defined upper limits on how many story points can be released at once.
Thanks, this is a neat idea to think about. You helped me start anchoring some untethered thoughts I've had floating around about this stuff recently.
>like all the ticketing system vendors seem to want you to do
It's so annoying how misaligned the incentives are between these vendors and the users. Standard fare for B2B products focusing on the people signing the checks though...
> Yellow sticky notes on a wall are awesome. They're a great deterrent to keep the PHBs from generating reports and dashboards to bother people with KPIs.
Wow! You should buy a .io domain name, throw up a landing page, and start selling this as a feature! (On a subscription model, naturally.) Seriously though, I love this point.
> As we move more and more towards ubiquitous "Product Orgs," separate from engineering
I've only seen this separation recently and I am not a fan. It just seems like a terribly wasteful way to consume time, money, and goodwill. I've never seen anything more effective than small balanced teams.
yes. engineers almost always make poor product decisions if the customer is not also an engineering team, in my (vast) experience. this is because engineers aren't product experts. and if the customer is an engineering team, then engineers mostly make mediocre decisions, typically because of poor incentives (gaming story points).
in every company i've been at that is larger than a handful of people that all know each other, a separate product org is a requirement if you want a product with broad appeal.
i'm not sure what your complaint about the product backlog is. product people should be generating a large backlog. that is literally their job, is it not? it's only through a large backlog that you can say, hey we need more engineers. or hey, our vision doesn't match our capabilities -- refine it. and so on.
How exactly does a large backlog have a negative effect on velocity?
>engineers almost always make poor product decisions
You seem to be misinterpreting my thoughts as being against the idea of a product manager role. That's not what I mean. I'm talking about product as a separate autonomous org with a CPO, versus embedded PMs reporting to EMs alongside the rest of the product development team. I have similar, though less strong, feelings about Design as a separate org as well, for many of the same reasons (communication overhead/bottlenecks).
With regard to backlog, there is actually some capacity planning math that can show that (unintuitively) backlog size negatively affects product delivery velocity. I'm not an expert there, I've just watched some internal tech talks about it in the past.
At a higher level, think about what happens when the business side requests a new feature, or a product change. What is the difference in the confidence of the estimates you can give if you have 20 tasks in the backlog versus 200? Sure, for that one thing you can up the priority and jam it in above a bunch of other stuff. But then, once that starts happening to people, where you prioritize a newer request above theirs, how does that affect their confidence in your planning ability on a longer timeline?
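As a back-of-the-envelope illustration (my own toy numbers, not from those talks): Little's Law says the average wait for an item is roughly the number of items ahead of it divided by throughput, so backlog size translates directly into how stale any commitment becomes.

```typescript
// Rough Little's Law sketch: how long does a newly accepted request wait
// before work even starts, given the backlog ahead of it? Illustrative
// numbers only.
function expectedWaitWeeks(itemsAhead: number, itemsFinishedPerWeek: number): number {
  return itemsAhead / itemsFinishedPerWeek;
}

const throughput = 10; // items a team actually finishes per week

console.log(expectedWaitWeeks(20, throughput));  //  2 weeks: an estimate you can stand behind
console.log(expectedWaitWeeks(200, throughput)); // 20 weeks: priorities will shift long before then
```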
> in my (vast) experience
Lastly, maybe pump the brakes a bit here, boss. This statement doesn't make me trust you more, independent of your ideas. We're on a web forum, not in a board room.
>Lastly, maybe pump the brakes a bit here, boss. This statement doesn't make me trust you more, independent of your ideas. We're on a web forum, not in a board room.
literally made me lol, thanks for that. i mean it in a good way, not a defensive sarcastic way. that particular phrase is funny. can't recall the movie that made it popular but it forces a smile.
i'm not trying to gain your trust. like 95% of posters, i just like the sound of my own voice. i could not care less what you do with the info ... and i shouldn't care. it's opinionated. if you agree with it, you already agreed with it. it's not like i'm trying to convince you that JWT is bad. i have no agenda.
i was just using the adjective to note that this is what i've seen over a long time in this industry. AFAICT most of HN is 25-somethings with hardly a clue so i felt it useful to note that my opinion doesn't come from the 2 jobs i've held since graduating. that's all. i wasn't trying to speak from a position of authority, so as to command respect.
i understand though why you reacted as such, as in retrospect it sure looks like the typical kind of comment, i'll admit that.
as to the rest of your reply, yup i misunderstood your argument.
as to priority churn, that's not a problem of the backlog per se, that's a function of poor management, or the simple state of being of an early startup. whether you have an actual backlog or not, the whiplash priorities are the problem and the backlog merely the embodiment. by removing the backlog, you haven't addressed the problem.
Great point! Compartmentalisation is one thing, but in large orgs everyone starts focusing on the one metric they need to boost (backlogs or PRs or design mocks) and we have no idea how this will affect other teams. We just have good metrics to show.
Having an entire separate org whose sole job is to create backlog items inevitably ends up with 100s of items, 15-30% of which are "high" priority. Many of these will have out-of-date specs by the time engineering can pull them in.
I would much rather have product folks embedded in the engineering team. That way they can focus on making sure the issues on deck to be worked on next have clear, up-to-date requirements and are prioritized correctly.
Having product people separated into their own organization almost always leads to a "throw it over the wall" situation.
You're not wrong about incentives, but an org where PMs have fully segregated duties from devs is exactly the organizational context that breeds disconnected engineers juicing their burndown chart. I mean, what do you think is the predictable outcome from a PM whose main job is googling his competitors and stuffing high-fantasy JIRA tickets into the engineering backlog based on his mystic customer divinations? Neither of these people has made contact with a customer in their lifetime.
however that's not a [dys]function of simply having a product org, that's a problem with a poorly qualified, poorly performing employee. or a poorly described job function. usually with the boss/management similarly poorly equipped since they don't see past the metrics they themselves (usually) have established for "success".
i think like the GP, you are conflating poor execution of the concept/model with the deficiencies of the model itself. one does perhaps encourage the other, however it doesn't need to be that way and isn't inherent.
i mean, in 3 words, "haters gonna hate". meaning that people that are poor at execution are going to be poor at it no matter the model in which they are embedded. but when you do have a good product team, the product org (as a model) gives them the ability to maximize a less than ideal engineering team. of course, still there's many a slip twixt cup and lip.
What I wish is that companies had people specialized in handling issue tracking systems. Expecting developers to do this is absurd; a lot of the work is just making sure that a ticket is not a duplicate, and similar issues.
On some level, I kind of wish the entire cottage industry of "issue tracking systems" would just go away. What's wrong with a simple Kanban board? What has ever been wrong with that? It seems to me like, in a lot of cases, engineers just lost their nerve in terms of confidently talking to sales people and the business side of the company, and couldn't deal with the (imo, healthy) tension any more. I understand that people want estimates and timelines and stuff like that, and those are totally reasonable. But flagellating ourselves over absurdly small timelines, and hand-wringing over tiny pieces of features and projects as the core driver of engineering productivity, just seems crazy to me. It seems a lot more like purely signalling to the business side some vague ideas about "commitment," or "hustle," or whatever, without having to speak to "those obnoxious sales people."
By product org do you mean a silo of an organisation dedicated to one particular product (so if a company has five "apps" they have five silos, with five marketing teams, five engineering teams, and five board members)?
I understand it to mean that there is a part of the org (separate from the engineering team) that shapes the direction of the product; decides "these are the features we're going to implement, these are the issues we're going to address"; one way of looking at it is that their primary output is putting things into the engineering team's backlog.
Aha - so you are saying there is a product manager (plus "org") whose job it is to produce big backlogs separate from the people who will implement them.
I tend to agree but I have an anecdotal counterexample. We recently switched from JIRA to Clubhouse by importing our history and backlog wholesale. Due to the substantial performance bump we have definitively leveled up as a team.
Status updates that used to happen in slack channels are now captured in Clubhouse, people log in to check on progress, stories get love and details.
JIRA cloud was unusably slow and as a result we didn’t want to use the tool. We hated it even though it technically “worked”.
I was a JIRA admin for a number of years for an organization that had a heavy QA workflow. It was all about making sure that every issue had an "owner" at any given time, that a strict approval and verification process was followed, and that as much up-front data as possible was collected during the initial report.
In my experience, unless the people entering the issues are paid, professional engineers, we'll never get the information we need up front.
I have found that the basic GitHub model works well for me.
When someone reports a bug, they are doing me a favor. It's in my interest to ensure there are as few roadblocks as possible to them sending a report. If they give contact info, then I can contact them with specific questions.
In my experience, a simple email form is the best way to solicit reports. It may have a workflow hidden behind it, but I have found that the simpler the workflow is, the more likely it is that the issue tracking will work.
JIRA basically starts off complicated, and it's possible to make it much, much worse.
It's also slow, but that wasn't really the problem we had. The workflows and forms were where it fell down for us.
As a current Jira admin, I constantly have to fight this. A non-dev team came into Jira and implemented a huge octopus of a workflow (despite my strong advice against it) and were shocked to find out it didn't really help them work better.
The handful of teams inside that department that I've gotten to switch over to a simpler To Do / In Progress / Done style workflow have been much happier.
Jira has many, many faults, but most of them are what the users do to themselves.
Jira is laggy as fuck and makes every interaction I have with it take 5 times as long as it needs to, or at least it has been in the two organizations that I've worked for that used it.
We're in what I tend to think of as Jira's pit of despair: there's no explicit support for the on-prem instance; workflow configuration regularly seems to go wrong and has a wide blast radius when it does; nobody trusts the data so we don't get any useful reporting out of it; the UI is... not exactly slow, but I wouldn't call it snappy either; and there's no money for addons that would genuinely help us.
Github Projects would be absolutely fine for us, but we can't switch out because Jira is "what everyone uses" and "has more features", despite us not really using any of them.
A lot of issues can be mitigated when you self-host vs using JIRA Cloud, and not just because you might have beefier hardware and a better network link. While I suspect it has gotten better, periodic reindexing and similar cron jobs were hell on a team separated by a 9-hour timezone difference, because a time chosen for convenience in San Francisco meant many Mondays of "Atlassian Cloud is down" for the team in Warsaw.
While we're all in the same timezone in our company, we're also programmers... so that doesn't always matter. I've used our JIRA in the middle of the night several times, and it's just... there.
edit: so yeah just highlights the self-hosting vs cloud...
Not sure what happened to you, but speaking anecdotally as someone who logs into Jira Cloud every day, spends 20-30 minutes interacting with the interface over an 8-10 hour day, and has done so for the last 3 years: it's had a handful of outages, nothing that really rose to the level of being memorable (and fewer than any internal issue tracker I've worked with), and the performance is... fine? I mean, it's not <50ms twitch-fast, so I guess I'm losing a minute or two of wall-clock time, plus whatever larger blocks of time occur when I get distracted waiting for a ticket to come up, but it isn't at all what I would call "unusably slow".
In our case though, we only have about 50 engineers and a few hundred thousand issues. Jira has performance issues in the 500 engineer / 100mm+ issue space - or it used to, perhaps they've resolved those issues by now.
Our JIRA instance took multiple seconds to load modals. Think 5+ seconds. It was miserable.
I would have gladly paid more for faster speed but it turned out the APIs were blazing fast and it was the JS that was slow. Not sure how we have had such different experiences. Seems impossible that your JS was executed that much faster than mine. I was using chrome FWIW.
I actually got $work to pay for a new desktop computer to prevent me from going insane because of the sluggish JIRA and Google Cloud Web Console, and it actually solved it.
My old x260 was not able to run jira/gcp/slack at a reasonable speed due to javascript performance :(
The main problem with Windows 95 stability was garbage third party drivers, but we had no problem blaming Microsoft for that.
Really, why wouldn’t you blame the maker of the bed?
If you make a tool that encourages Rube Goldberg machines, then you made something far worse than a Rube Goldberg machine; you built a device to construct them.
> But can be usable if you are not super sensitive to small latency.
Our JIRA instance would lag while typing in the description box. You have to wait a second or two for it to catch up after typing a sentence. It was absolutely unusable.
5+ seconds sounds wild. Our active user count is a multiple of the 5k max users for Jira Cloud, and the only actions I can think of that are >2s are bulk edits and complex JQL.
I haven't noticed JIRA cloud being that slow, but maybe that's because the team is small.
Did you start to experience performance issues as team size increased? Did you have a lot of issues or data held in the issues? Or was there another issue that you think led to the slowness?
I'm interested in your insights as we went with JIRA initially, since it was pretty much the gold standard and we thought it would scale nicely as we added team members, but if it is not going to do so, then it may not be worth the JIRA premium.
I don't want to put words in OP's mouth, but I suspect that OP was referring to just, like... "you click on something and then wait" type of slow. Something like taking several seconds to load an issue.
In my case, we were a small team (<10 users, <1k total issues split in maybe 8 projects), and were running into random load time slowness irritations. It never stopped us from doing what we needed to do, but it sure did make it more frustrating to do it. We are using a cloud instance.
It’s a documented phenomenon in UX that if an activity involves a number of small pauses in a short enough time span, the user perceives them as a single, long pause.
If your app is fast but your workflows are terrible, people will call you slow because everything takes four interactions. If you have a bad workflow and a slow-ish app, people will talk about you on the Internet.
I do agree that a lot of the process issues often reside in the organization itself.
In an ideal world, you don't need any kind of tracking or process. The right things just happen. However, the reality is that companies often have goals and roadmaps they have to deliver. Things move fast and teams need to know what is being worked on. We've worked in 5- and 5000-person engineering organizations and the need for coordination is always there. And personally, I just need to know what I need to do next.
We spent the past year talking with our early users. New startups, and even teams at very successful companies, struggle with the process. They might know how they should improve things, but it's hard to find the time or energy to actually implement it. We want to help with this.
So in addition to the tool, we are developing an understanding of the practices that help teams focus and that reduce some of the bad habits we might have. The tool will then help teams implement and maintain these practices going forward. We ourselves have also been working with this "Linear Method", which we shared here: http://linear.app/linear-method
Boo to this take. Not all tools are the same, and "Good tools obviate bad processes" is a mantra I saw once in a talk that has stuck with me for 15 years. You're not wrong that it's a people problem, but tools shape human behavior. That's Design.
The quote is attributed to Michael B. Johnson, aka wave, Software Director at Pixar.
API design is a great example of this. Good API design incentivizes the right choices and clean integration. I often argue that it's not just "oh this API matches the spec better" or "this API is lower maintenance" but rather that I've found certain APIs can almost trick developers into writing good code. It's a neat experience.
That being said, there is some necessary internalization that a tool is solving a people problem which is maybe what OP is trying to get at. No matter how good an API is, if a developer is dead set on writing bad code, it'll be bad. So just deploying a new tool, no matter how clever, won't fix an organization that doesn't want to be fixed.
I haven't tried Linear, but my company uses Jira. It's abysmally slow. Not "there are too many tickets and it's hard to manage" slow, just the interface is slow. I opened an issue link in a new tab. It took about 7 seconds for the page to become interactive (able to click on the issue status and get a dropdown) and about 9 seconds for the loading to complete and elements to stop jumping around. This is in Firefox 78 on a Lenovo T480 with 8th-gen Core i7 CPU. Speed.cloudflare.com reports 28.5ms latency with 38.1ms jitter, 339Mbps down. Not a generally slow setup. At least in the case of Jira the interface speed is a huge issue.
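If anyone wants to gather comparable numbers, something along these lines in the devtools console dumps the coarse navigation timings (it only covers the initial document load, so it understates the post-load jumping around I described):

```typescript
// Pull the Navigation Timing entry for the current page load and print a few
// coarse milestones in seconds. This approximates "interactive" at best; it
// won't capture late JS hydration or layout shift.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
console.table({
  "DOM interactive (s)": (nav.domInteractive / 1000).toFixed(1),
  "DOMContentLoaded (s)": (nav.domContentLoadedEventEnd / 1000).toFixed(1),
  "load event end (s)": (nav.loadEventEnd / 1000).toFixed(1),
});
```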
Is there a site that lets you run performance profiles in browser developer tools and share the results? I could imagine that being really useful internally for devs troubleshooting and working with support.
It'd be harder to pull off as a way to shame poor performance because of the need to compare like with like, but I think it's needed to get the attention of folks.
Accessibility checkers are often rudimentary, but they are available to non-technical folks in companies and can be very effective at getting a conversation started.
The other problem cycle is “we’ll get JIRA right this time”. This is followed by happy clicky people immediately shafting it, at least two years of suffering, and then some PM selling some new clothes to the emperor. Everything degrades this way. Even GitHub, the moment someone adds workflow automation. Intent and causality are two very different things.
My favourite problem is a JIRA instance that sets a resolution of “Incomplete” on a ticket immediately at creation. That’s a valid resolution, so all my tickets are closed as far as it’s concerned. Ugh.
I’m slowly working on an OSS replacement for all of this hell which is designed for inflexibility from an organisational perspective. It’ll be open source. I’m sure no one will use it because it’s inflexible.
"I think the reason we see a steady stream of new issue trackers is that teams are trying to fix with software what are people problems."
Imagine what would happen if everyone in an organisation had to use the same tool for text and code editing, say, for some reason, the PHB inflicted Word on the organisation as the only allowed code editor. Would that be good?
From my pov, most of the churn with ticket trackers is down to current solutions trying to own everything and apply a one-size-fits-all model to everyone in the enterprise. The upshot being that the ticket trackers are invariably a sh*tty compromise for many/most individuals in the organisation. Think about the number of slides and reports that are hand-generated for progress meetings in an enterprise. What about all of the various todo apps and systems people use? Why would so much effort be expended if the enterprise ticket tracking system was meeting requirements?
Fixing the problem requires using systems based on common standards that allow _individuals_ to use what works best for them but still interoperate freely with each other. fwiw, another shiny issue tracker will not do that unless its basic unit of tracking is something akin to a text file that can be shared with the next shiny issue tracker.
It's not just speed, it's complexity. I've seen the cycle play out enough to know the real thing that drives how productive a ticketing system feels: How many distinct kinds of people are using it.
The new one always seems great, because whatever team pilots it is the only team using it, so they're able to keep it nice and lean and closely adapted to their own needs. But, as soon as the decision to migrate is made, then you don't just have additional teams using it. You have additional teams wanting to use it as a communication and monitoring channel, and building up reports and metrics and dashboards on top of it, and imposing restrictions on the ticketing system that ultimately limit other groups' ability to streamline their own workflows, and by the time everyone realizes they're back on the same old pain train yet again, it's too late to do anything about it.
It seems that developers are always the ones who get hit the hardest, because they end up with the largest number of outside parties who want to get involved in their business.
> Don't migrate your backlog, start with only a couple engineers in a new issue tracker...
I experienced this in the wiki/documentation space some years ago when my org migrated from the Trac wiki to Confluence. It seemed like such a boost initially, but it wasn't a boost at all; it was just the fact that for the first ~12 months after migrating, you could trust that everything you found in the wiki was freshly written and up to date (and the overall corpus was small enough that the search function actually returned useful results).
The whole experience has given me the idea of making a Confluence plugin that slowly fades out each page to grey as a visual indication of its waning relevance. Anyone editing, commenting, or "liking" the page immediately freshens it again, unless they somehow indicate that the action is actually a vote against (like nofollow is for links). When a page is sufficiently abandoned, it goes on a candidate-for-deletion list, gets a warning box at the top, and is blacklisted from appearing in the default search results.
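A sketch of the scoring I have in mind (just a toy model; I haven't looked at what the actual Confluence plugin API would require):

```typescript
// Hypothetical staleness model for the fade-out idea above. Fresh positive
// activity (edit/comment/like) resets the clock; a "vote against" ages the
// page faster.
const FULL_FADE_DAYS = 180;     // fully grey after ~6 months of silence
const DELETION_CANDIDATE = 0.9; // past this, warn + drop from default search

function staleness(lastPositiveActivity: Date, votesAgainst: number, now = new Date()): number {
  const daysIdle = (now.getTime() - lastPositiveActivity.getTime()) / 86_400_000;
  const penalty = votesAgainst * 30; // each downvote ages the page by ~a month
  return Math.min(1, Math.max(0, (daysIdle + penalty) / FULL_FADE_DAYS));
}

function cssOpacity(score: number): number {
  return 1 - 0.7 * score; // never fade below 30% so the page stays readable
}

function isDeletionCandidate(score: number): boolean {
  return score >= DELETION_CANDIDATE;
}
```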
If it’s late afternoon and I’ve finished or am about to start a particularly challenging task, I’ll tend to take on other tasks like basic research or wiki maintenance.
Essentially I treat the wiki like a B-Tree. When the landing page gets crowded, I move a bunch of related things down one level in the hierarchy, trying to retain keystone ideas within the first two levels (and letting related concepts be discoverable from those).
Over time the oldest links and links of only temporary utility get pushed out to the leaf nodes, where you aren’t likely to encounter them accidentally.
Really, though, we should be applying the same sort of version control we have in our source code. You should be able to track down how old a paragraph is in the wiki.
https://www.getguru.com/ has a verification feature where each page (called a card) has an owner or group of owners and a set expiration period (1 week, 3 months, 1 year, etc.). Past the expiration, the page gets marked as untrusted (shown at the top of each card) and requires a review by the owner(s). Any user can also mark a card as untrusted, prompting a review. Every expired card gets displayed to its owners in a dedicated UI (page) and can be quickly reviewed. The editor in Guru is a bit clunky at times (worse than Confluence... if that's possible) and not really suitable for something like specs, but I have yet to find anything else that actually tries to prevent wiki rot.
> New issue trackers feel faster for the same reason switching browsers tends to feel faster—you're getting rid of all of the crap you piled up in the old one. Don't migrate your backlog, start with only a couple engineers in a new issue tracker, and suddenly, wow!, this new tool is so much better!
Sluggish software can be more than just a data bloat issue. It can also be caused by feature bloat, or by software architecture that scales poorly.
Anecdotally, I recently worked on a project using a fresh Jira Cloud account. It was still slow.
We do a lot of work at Clubhouse to keep things fast.
We have alerts set up that fire if the p50 or p95 of certain actions spike and we treat it as a bug if things are headed in the wrong direction.
We still see a lot of random cases of things being slower though (shakes fist at random browser plugin upgrades).
It's a hard problem, but if you don't design (and monitor) for this from the beginning, things are going to slow down over time as you scale to hundreds of thousands of users.
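Not our actual alerting config, but the shape of the check is roughly this (illustrative thresholds):

```typescript
// Latency regression check: compare this week's p50/p95 for an action
// against a baseline window and flag anything headed the wrong way.
function percentile(samplesMs: number[], p: number): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

function isRegression(current: number[], baseline: number[], p: number, tolerance = 1.2): boolean {
  return percentile(current, p) > percentile(baseline, p) * tolerance;
}

// Treat it as a bug if either the median or the tail spikes.
const shouldAlert = (cur: number[], base: number[]) =>
  isRegression(cur, base, 50) || isRegression(cur, base, 95);
```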
Jira is so slow that I installed plugins in both of the dev tools I use a lot, so I can view tickets without having to use the web interface; it's just painful when you have to view a lot of tickets quickly.
It's the same with communications tools. "Our email is an INBOX-trillion mess because we use unstructured email rather than structured tickets to manage our processes. Slack is the answer!". Six months later, they have the exact same problem with Slack because the communications patterns haven't changed.
That said, some tools are really just abysmal, and richly deserve the hate directed at them.
> New issue trackers feel faster for the same reason switching browsers tends to feel faster
It's not necessarily about a faster process, but Jira is agonizingly slow: each click triggers a cascade of HTTP requests in a synchronous way and it literally takes seconds every time you want to do something. Every time I have to touch it, I feel angry for the next hour just because of the induced frustration. Any tool more responsive would be an improvement for me.
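To illustrate why a cascade like that hurts (toy endpoints, not Jira's actual API): every dependent round trip adds its full latency, whereas independent requests fired together only cost the slowest one.

```typescript
// With ~300 ms per round trip, the sequential version costs ~1.2 s;
// the parallel one ~300 ms. Endpoints are made up for illustration.
async function loadIssueSequential(id: string) {
  const issue = await fetch(`/api/issue/${id}`).then(r => r.json());
  const comments = await fetch(`/api/issue/${id}/comments`).then(r => r.json());
  const watchers = await fetch(`/api/issue/${id}/watchers`).then(r => r.json());
  const workflow = await fetch(`/api/issue/${id}/workflow`).then(r => r.json());
  return { issue, comments, watchers, workflow };
}

async function loadIssueParallel(id: string) {
  const [issue, comments, watchers, workflow] = await Promise.all(
    ["", "/comments", "/watchers", "/workflow"].map(suffix =>
      fetch(`/api/issue/${id}${suffix}`).then(r => r.json())
    )
  );
  return { issue, comments, watchers, workflow };
}
```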
I'm genuinely interested: is this really true? Is Jira necessarily slow when there's a lot of stuff in it? I'm not sure I believe it. There are graph-of-objects-plus-metadata applications out there that are pretty fast. I'm not an expert in this domain, but I bet an issue tracker can be made that doesn't feel painful at real scales.
I didn't interpret the OP to mean slow from an "application performance" perspective - instead, as slow from a "process" perspective.
The ticket needs to be filed in the right queue. It needs the "right" tags. It needs to traverse the "approved" workflow correctly. It didn't get assigned to the right epic. etc, etc, etc.
It's all of that - the process built up over time, in response to organic needs - that makes things "slow" and drives such a rejection of these tools by developers and their teams. We (collectively) rarely step back and say "what process can we cut?" - only when we use a new tool do we have the chance to address that.
It's both. JIRA can often be terribly slow. Each page takes about 10 seconds to load for me, and I don't know why. Other people say it is ~2s for them. I suspect it has something to do with networking and the number of requests being made, etc.
Then, on top of it the flows in Jira can be super slow, which is another story.
What's your question here? The premise of the OP is that they created a fast issue tracker.
People complain about Jira being slow (me being one of them.) No one has claimed that all issue trackers are slow. ClickUp, Hive, Linear, are all snappy.
The comment I replied to said that trackers are slow because they are full of stuff, and when you try a new tracker it feels fast because of lack of stuff.
I think you might be misinterpreting him. When I read that, "full of stuff" meant old tickets, process that you had to go through, etc. Slow =/= app performance, but general cruft.
I don't think he literally meant that the app's performance was slowed down by the volume of tickets. I don't think Jira's performance gets much worse even with 10,000s of tickets in it.
The baseline performance of the web app is just baseline bad though.
"New issue trackers feel faster for the same reason switching browsers tends to feel faster—you're getting rid of all of the crap you piled up in the old one. Don't migrate your backlog, start with only a couple engineers in a new issue tracker, and suddenly, wow!, this new tool is so much better!"
I think the issue with issue trackers is that every organization has its own way of tracking issues. So issue trackers often try to do everything for everyone, and end up being at best mediocre for most users.
Is Superhuman any good for that matter? I refuse to have to talk to a sales person on the phone in order to use a mail app. Which is what their signup flow required last time I tried.