I hate solving problems and not knowing how I did it, so this particular experience has stuck in my craw.
Trying to develop a budget to pay off debts, my partner made this elaborate Excel spreadsheet and the output was that basically she had no spending money and I had very little, until one or both of us got a raise. It was far more austere than either of us were willing to go. So I started over using a different equation for 'fairness', and a different layout because something about her tables was just confusing and messy. When I was done, we had $300 a month of extra spending money between the two of us, despite using the same targets for pay-down, savings and bills.
I spent about 90 minutes poking at her spreadsheet and mine looking for the error and never did find it. I don't know what the right solution is to this sort of problem, at least for non-developers. But if I was skeptical of Excel going in, I was doubly so after that. Especially for something that is going to be used to make decisions that will affect every day of your life.
You made two different spreadsheets, they came up with different results, and you're blaming Microsoft for the error? Really?
Excel, whether you like it or loathe it, is in such wide use around the world that $300 math errors would have been noticed a very long time ago. I could believe that there are still many lurking bugs with obscure corner cases, nasty floating point rounding minutiae and so on, but I would bet (checks spreadsheet) $300 that your mismatched results were solely due to your own failures.
I've re-read the parent comment a few times, and I really don't see how this is your takeaway. The author is complaining that they find Excel difficult to inspect and different spreadsheets difficult to compare. There's no claim that Excel did a calculation wrong.
But they aren't noticed. The classic blunder is dropping the last item off a summation, and nobody notices until they add another item and things look janky.
Those happen all the time, we just only hear about it when the consequences are outsize.
In fact, good practice is to add a check to verify that your pivot table has been refreshed and that its data matches the data in the source table, just like you write tests in software.
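As a sketch of what such a check amounts to (hypothetical data and function names, with Python standing in for spreadsheet formulas):

```python
# Hypothetical sketch: reconcile a pivot-style aggregate against its
# source rows, the spreadsheet equivalent of a grand-total check cell.
from collections import defaultdict

def pivot_sum(rows, key, value):
    """Group rows by `key` and sum `value`, like a one-field pivot table."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

def reconcile(pivot, rows, value):
    """A refreshed pivot's grand total must equal the source's grand total."""
    return abs(sum(pivot.values()) - sum(r[value] for r in rows)) < 1e-9

source = [
    {"category": "rent", "amount": 1200.0},
    {"category": "food", "amount": 450.0},
    {"category": "food", "amount": 75.0},
]
pivot = pivot_sum(source, "category", "amount")
assert reconcile(pivot, source, "amount")  # a stale pivot would fail here
```

A stale pivot (one computed before new rows were added) fails the check immediately instead of silently underreporting.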
I always store data in tables for this reason, but last week I ran into a problem because someone else had added new data to my sheet somehow without the table range being automatically adjusted. The table looked visually correct but when you sorted or filtered it, the rows towards the bottom were not affected. So sadly, this is not a silver bullet, but it does help a lot.
There's nothing wrong with Excel. Build an organized spreadsheet with clear separation of data and presentation, and write checks throughout and you will end up with a perfectly error-free workbook.
Nothing wrong?
Excel autoconverts values if it thinks it knows the data type of a column, and Excel treats 1900 as a leap year.
Genetics researchers changed the names of genes (like SEPT1 and MARCH1, which Excel converts to dates) to prevent autoconversion errors.
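The leap-year quirk is easy to verify against a real calendar library. Excel's 1900 date system assigns serial number 60 to February 29, 1900, a date that does not exist (a deliberate Lotus 1-2-3 compatibility choice):

```python
# 1900 is not a Gregorian leap year (divisible by 100 but not by 400),
# but Excel's 1900 date system says it is, for Lotus 1-2-3 compatibility.
import calendar
from datetime import date

assert calendar.isleap(1900) is False  # the real calendar
assert calendar.isleap(2000) is True   # centuries divisible by 400 are leap

# The day Excel's serial number 60 claims to be does not exist:
try:
    date(1900, 2, 29)
    exists = True
except ValueError:
    exists = False
assert not exists
```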
And JavaScript autoconverts all your numbers to floats, introducing errors every time you touch them. Meanwhile most other popular languages will happily wrap your fixed-size integers around on overflow, or silently perform integer division with truncation when you don't append a decimal fraction part manually to any of the numbers.
Point being, all tools come with strange caveats that one needs to be familiar with. A nice feature of Excel is that the caveats are visible, because unlike writing code, Excel is reactive and interactive.
I mean, some of the stuff Excel does to your data is downright idiotic wrt. the common use cases, and probably exists only for the sake of backwards compatibility. But let's not pretend you don't need to pay attention if using a tool you're not proficient with.
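A minimal Python sketch of the caveats mentioned above; Python's floats are the same IEEE-754 doubles JavaScript uses, and the 32-bit wraparound is simulated with a mask since Python integers don't overflow:

```python
# Python floats are the same IEEE-754 doubles as JavaScript numbers:
assert 0.1 + 0.2 != 0.3                  # classic binary rounding surprise
assert abs((0.1 + 0.2) - 0.3) < 1e-15    # the error is tiny but nonzero

# Integer division silently truncates (for positive operands, the same
# result C or Java would give when neither operand is floating-point):
assert 7 // 2 == 3
assert 7 / 2 == 3.5

# Fixed-size integers wrap on overflow in most compiled languages;
# simulated with a mask here, since Python ints are arbitrary-precision:
INT32_MAX = 2**31 - 1
wrapped = (INT32_MAX + 1) & 0xFFFFFFFF
if wrapped >= 2**31:
    wrapped -= 2**32                     # reinterpret as signed 32-bit
assert wrapped == -(2**31)               # INT32_MAX + 1 wraps to INT32_MIN
```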
The typical users of JavaScript are programmers; they know about these errors.
The typical users of Excel most of the time aren't aware of Excel's weaknesses. And good luck finding these errors in large data sets.
There is a difference between visible and lucid, especially if you do not expect errors.
> But let's not pretend you don't need to pay attention if using a tool you're not proficient with.
Decades of experience working in office environments tells me many many many many many people do not pay attention, period; their experience with the tool is irrelevant.
My experience also shows that inexperienced people do not know WHAT to look out for, especially in Excel, so they are often more susceptible to mistakes.
This really comes into play when working in larger organizations where Excel workbooks are passed around from person to person, often existing for years or decades at a time, where the people using the workbook are separated from the person that created the workbook.
Try figuring out an Excel spreadsheet created 10 years ago by people no longer with the company, with several data links importing data from all over the place.....
And a band saw will happily cut off your hand if you push it into the moving blade. That's why the best practices of operating one suggest that you never keep your hands anywhere near the cutting edge.
Excel's idiosyncrasies are very much on the level of the typical productive computing tool. They are less maddening than half the featureset of C++, three quarters of the featureset of Javascript, and 110% of the featureset of bash.
Despite that, people manage to get work done using bandsaws, C++, Javascript, and the occasional shell script.
Band saws, C++, and JavaScript have a different user group than Excel. Most of their users have been trained on the tool they use.
If Excel were a band saw, there would be a lot more people missing fingers.
You can find idiosyncrasies in any tool. Excel is a fine tool for 99% of use cases. Sure, it's quirky here and there, but the sheer amount of work that gets done globally in Excel should be testament enough to its value. Highlighting edge cases to bash on the tool is either myopic or intellectually dishonest.
1) X delivers tremendous value.
2) X has some significant problems.
3) X has no problems.
Obviously 1 and 3 are compatible. I claim 1 and 2 are also compatible (and, actually, not uncommon...).
You said that "[t]here's nothing wrong with Excel", and someone responded with an example of a problem that they considered significant. If we read "there's nothing wrong with Excel" as a strong claim of 3, then that's obviously a refutation of your claim. You could argue that the specific problems are not actually significant enough to rise to the level of notice. You could argue that they are not, in fact, problems at all. You could argue that you didn't, in fact, mean to make claim 3 in any strong sense (which I think is what was actually going on here - that's valid, English works that way).
You've instead interpreted it as a flawed refutation of claim 1 (and took the opportunity to demean your conversation partner). I don't think that's productive.
Excel is good at what it was developed for. Unfortunately, it is often used for other things, for example, in this case, as a database. As a result, departments build solutions that appear to work at first glance but contain errors in special cases that are difficult to detect and debug.
I'm not so sure he claimed that Excel can't add and subtract numbers. I think the argument is that even two people can make such a mess of a spreadsheet that nobody can really untangle it later.
Tack on that developers have other, better options, but if you're not a developer I don't have a good solution for you, and that I don't like that state of affairs.
The OP said that using a spreadsheet program can be messy and prone to mistakes, and implied that there could exist alternative approaches. The OP never said a bug in Excel was responsible for the mistake or blamed Microsoft for corrupting the calculation.
In the same sense, one could say manual memory management in C can be messy and prone to mistakes. It doesn't mean that one believes C should never be used in any circumstance, and it doesn't mean one's trying to blame the C compiler, libc, or Dennis Ritchie for corrupting one's program.
Hypothetically, if someone writes the following reply:
> The C programming language, whether you like it or loathe it, is in such wide use around the world that a NULL pointer dereference would have been noticed a very long time ago. I could believe that there are still many lurking libc bugs with obscure corner cases, but I would bet the memory corruptions were solely due to your own failures.
The only real way to solve issues like this is a step-by-step reconciliation between the old model and new. Each assumption and calculation must be tested to confirm that they replicate the same output. This is extremely difficult if the underlying structure/assumptions are different (e.g. monthly views vs. transactional views, etc.).
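A minimal sketch of that reconciliation idea, with hypothetical category names and amounts; the point is that comparing models per line item localizes a gap like the $300/month one instead of leaving you staring at two whole spreadsheets:

```python
# Hypothetical sketch: diff two budget models category by category so a
# discrepancy can be traced to specific line items. Data is made up.
def reconcile_models(model_a, model_b, tolerance=0.005):
    """Return {category: (a, b, b - a)} wherever the two models disagree."""
    diffs = {}
    for category in sorted(set(model_a) | set(model_b)):
        a = model_a.get(category, 0.0)
        b = model_b.get(category, 0.0)
        if abs(a - b) > tolerance:
            diffs[category] = (a, b, b - a)
    return diffs

hers = {"rent": 1200.0, "debt": 800.0, "savings": 400.0, "spending": 100.0}
mine = {"rent": 1200.0, "debt": 800.0, "savings": 400.0, "spending": 400.0}

diffs = reconcile_models(hers, mine)
assert diffs == {"spending": (100.0, 400.0, 300.0)}  # the gap is localized
```

This only works once both models are expressed over the same categories, which is exactly the hard part when the underlying structures differ.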
That rework based on suspicion is typically how it’s done. :) Add to that subject matter expert review. For example, I used to work on a sheet of scoop size calculations for industrial digging and nothing from it would reach sales staff or customers until a couple of the crusty old experts played with it and vouched for the calculations.
Imperfect measures, but they keep the work within the grasp of everyday/business users in a way a formal test suite wouldn’t necessarily.
I didn't rework it because I thought the numbers were bad. She and I agreed to ground rules, [but] I could not commit to the consequences even though the 1st order consequences came out in my favor. The second-order consequences would be that I'd tricked her, intentionally or otherwise, which is decidedly unhealthy in a partnership. I had contributed 2/3rds of the income but 2/5ths of the debts (the latter being the main sticking point) so while 'fair' was not going to be 50:50, I'm not some monster who's going to buy high end electronics while my partner is eating ramen.
I expected worse news for myself after re-doing the numbers, and in fact I ended up with a little bit more spending money with the new numbers.
I mention this only because I have come to expect policy people to stick to their initial narrative a little more enthusiastically than I could. Getting them to review data and policy unless there is a clear benefit for themselves is difficult. It's very common for people to get promoted by challenging this friction in ways that benefit everyone (because the deciders couldn't connect the dots). It's always newsworthy, and I wish it weren't.
The secret to using Excel well in large, complex tables/spreadsheets is the same secret as coding large, complex programs.
You use structure, design away the complexity, implement constraints, error checking and tests.
Note it's true that an inexperienced person is liable to make spaghetti code either way, but there's nothing fundamental about a spreadsheet format that makes it inherently unusable for a lot of small/medium problems. Indeed it even has benefits in terms of interactivity/turnaround/accessibility.
Of course, there's also a lot of problems with Excel and reasons not to use it like a database or anything which fundamentally relies on maintaining data integrity, and it's liable to be the first tool reached for by the non-experienced, who will generally make a mess of things large and complex as a rule.
> You use structure, design away the complexity, implement constraints, error checking and tests.
I've never thought of Excel as an ideal tool for any of these things. I'm struggling to think of how it would have change-verifications/tests in the same way that software projects do.
(I'm certainly open to the possibility that I'm ignorant/unaware on this subject)
Depending on your point of view, you don't do them in the "same" way (i.e. with separate unit tests or compiler-enforced safety), and explicit diffs are hard/impossible. And "ideal" depends on the task: it's ideal for me to get a working interactive graphic + visualisation to a user behind a corporate firewall via email in the same afternoon I get the request. I wouldn't actually choose to do 'programming' in Excel, but I consider a budget small and simple, and programming is probably overkill.
What you do is much closer to old-school low-level programming: define the relationships between your tables and variables well, set up explicit corresponding arrays of 1s and 0s that are themselves error checks on the underlying structure/contents of your tables, calculate things in two different spots/ways and verify that equivalence holds, and use simple red/green conditional formatting to draw attention to when things fall outside of the expected state, etc.
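The "calculate it two ways" discipline translated into a small Python sketch (hypothetical numbers); in a worksheet, the `status` cell would be the one with red/green conditional formatting:

```python
# Hypothetical numbers: compute the grand total three independent ways;
# if the paths disagree, something in the table structure is broken.
table = [
    [10.0, 20.0, 30.0],
    [ 5.0, 15.0, 25.0],
]
by_rows = sum(sum(row) for row in table)           # sum of row totals
by_cols = sum(sum(col) for col in zip(*table))     # sum of column totals
flat = sum(cell for row in table for cell in row)  # sum over every cell

# In a worksheet this would be the conditionally formatted check cell:
status = "OK" if by_rows == by_cols == flat else "CHECK FAILED"
assert status == "OK"
```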
This is precisely why I went with YNAB [0], started budgeting better and saved quite a bit of money.
Everyone who sees me paying for an app (I'm from India) asks me, "why can't you just use Excel and do the same thing for free?" Excel sure is powerful, but in real life, your mileage may vary.
Please don't break the site guidelines with name-calling and personal attacks, regardless of how bad another comment is or you feel it is. We're trying to avoid a culture where people pick on others like that.
That's exactly right, no one is.
That's why no one uses C++ or JavaScript in safety-critical devices.
Even in 'normal' industry it is a huge problem and we need safer languages, e.g. Rust (merits to be seen).
Err... C & C++ are the primary languages used in typical safety-critical systems. I've used them.
JS... of course no one uses that, mainly because the abstraction level is too high: the layers of non-safety-critical code massively outweigh the application code.
The critical thing about safety-critical code isn't really about the code... it's the whole development process. Implementation is just one small part of that process.
> That's why no one uses C++ or JavaScript in safety-critical devices
Ooh, no that's fairly incorrect. Javascript for sure is less common because it's the web browser language, but C and C++ are used all the time in embedded safety critical designs.
I wish I could remember the movie (it appears to be a reference to Al Capone, of all people), but there's a scene that gave me the tingles. The leading man, about to exact some revenge, turns to the antagonist and says,
"You've mistaken my kindness for weakness."
If I worked as a mental health technician I'd be complaining about how bad the system is for trauma survivors and neuro-atypical individuals. But I'm not. I'm a developer who has focused on - or been forced to look at - people struggling with User- and Developer-Experience issues for much of my career. If I'm talking to you about your struggles, you get sympathy. If I'm talking about policy, or policy makers, I'm going to give you a taste of what my 'customers' have to go through every day. And I'm not going to feel a bit sorry about it. If you dismiss it, we may have a problem.