For people finding this thread via web search in the future:
screen.studio is macOS screen recording software that checks for updates every five minutes. Somehow, that alone is NOT the bug described in this post. The /other/ bug is: their software also downloaded a 250MB update file every five minutes.
The software developers there consider all of this normal except the actual download, which cost them $8000 in bandwidth fees.
To re-cap:
Screen recording software.
Checks for updates every five (5) minutes. That's 12 times an hour.
I choose software based on how much I trust the judgement of the developers. Please consider if this feels like reasonable judgement to you.
ryandrake 6 hours ago [-]
Yea, it seems like the wrong lesson was learned here: It should have been "Don't abuse your users' computers," but instead it was, "When you abuse your users' computers, make sure it doesn't cost the company anything."
infogulch 5 hours ago [-]
That's a good summary and explains many ills in the software engineering industry.
ljm 5 hours ago [-]
$8000 for 2 petabytes of traffic is pretty cheap for them also.
There are plenty of shitty ISPs out there who would charge $$ per gigabyte after you hit a relatively small monthly cap. Even worse if you're using a mobile hotspot.
I would be mortified if my bug cost someone a few hundred bucks in overages overnight.
benwilber0 6 hours ago [-]
> their software also downloaded a 250MB update file every five minutes
How on earth is a screen recording app 250 megabytes
pixl97 5 hours ago [-]
Because developers can suck.
I work with developers in SCA/SBOM and there are countless devs that seem to work by #include 'everything'. You see crap where they include a misspelled package name and then fix it by including the right package but never removing the wrong one!
PeeMcGee 5 hours ago [-]
The lack of dependency awareness drives me insane. Someone imports a single method from the wrong package, which snowballs into the blind leading the blind and pinning transitive dependencies in order to deliver quick "fixes" for things we don't even use or need, which ultimately becomes 100 different kinds of nightmare that stifle any hope of agility.
xxr 4 hours ago [-]
In a code review a couple of years ago, I had to say "no" to a dev casually including pandas (and in turn numpy) for a one-liner convenience function in a Django web app that has no involvement with any number crunching whatsoever.
whstl 2 hours ago [-]
Coincidentally, Copilot has been incredibly liberal lately with its suggestions of including Pandas or Numpy in a tiny non-AI Flask app, even for simple things. I expect things to get worse.
hooverd 11 minutes ago [-]
There's a ton you can do with sqlite, which is in the Python standard library. You just have to think about it and write some SQL instead of having a nice Pythonic interface.
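E.g., a tiny sketch using nothing but the standard library:

    import sqlite3

    con = sqlite3.connect(":memory:")  # or a file on disk
    con.execute("CREATE TABLE events (name TEXT, count INTEGER)")
    con.executemany("INSERT INTO events VALUES (?, ?)",
                    [("render", 3), ("export", 1)])
    # Let SQL do the aggregation instead of reaching for pandas
    print(con.execute("SELECT name, SUM(count) FROM events GROUP BY name").fetchall())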
AndrewStephens 4 hours ago [-]
>> their software also downloaded a 250MB update file every five minutes
> How on earth is a screen recording app 250 megabytes
How on earth is a screen recording app on a OS where the API to record the screen is built directly into the OS 250 megabytes?
It is extremely irresponsible to assume that your customers have infinite cheap bandwidth. In a previous life I worked with customers with remote sites (think mines or oil rigs in the middle of nowhere) where something like this would have cost them thousands of dollars per hour per computer per site.
mobilemidget 5 hours ago [-]
Or... why on earth do you need to check for updates 288x per day? It sounds and seems more like 'usage monitoring' than making sure all users have the most recent bug fixes installed. What's wrong with checking for updates once upon start (and caching per day)? What critical bugs or fixes could possibly be issued that warrant 288 update checks?
pcthrowaway 3 hours ago [-]
A 250MB download should be opt-in in the first place
hulitu 1 hour ago [-]
> A 250MB download should be opt-in in the first place
I've read on HN that a lot of people have 10Gb Ethernet at home. /s
mobilemidget 49 minutes ago [-]
I got 8 :)
zoky 30 minutes ago [-]
Do you mean 8 homes with 10Gb Ethernet, or 1 home with 8 10Gb Ethernet connections?
absolutelastone 3 hours ago [-]
They probably just combined all the phone-home functionality into one request. Usage monitoring includes the version in use, which leads to an automatic update when needed (or when bugged...).
f1shy 4 hours ago [-]
> What's wrong with checking for updates upon start once (and cache per day)
For me that would also be wrong, if I cannot disable it in the configuration. I do not want to extend startup time.
dontlikeyoueith 3 hours ago [-]
Wait until you learn about non-blocking IO. And threads.
It's a whole new world out there.
tough 3 hours ago [-]
If you're expecting the guys shipping the 250MB bloated app to get this right, I might have a bridge to sell you
partdavid 5 hours ago [-]
It sounds right, and this is the kind of thing I'd expect if developers are baking configuration into their app distribution. Like, you'd want usage rules or tracking plugins to be timely, and they didn't figure out how to check and distribute configurations in that way without a new app build.
hulitu 1 hour ago [-]
> they didn't figure out how to check and distribute configurations in that way without a new app build.
Any effort to use their brain shall be drastically punished. /s
latexr 5 hours ago [-]
As I recall, it’s an Electron app. I just checked and the current version of Google Chrome is 635 MB, with its DMG being 224 MB.
So yes, it’s insane, but easy to see where the size comes from.
tough 3 hours ago [-]
Tauri has been a thing for a while, it baffles me people still choose Electron without a good reason to do so.
Also, webapps are just great nowadays; most OSes support installing PWAs fairly decently, no?
ffs
nsingh2 3 hours ago [-]
Tauri is not as platform-agnostic as Electron is because it uses different web views depending on the platform. I ran into a few SVG-related problems myself when trying it out for a bit.
For example, on Linux, it uses WebKitGTK as the browser engine, which doesn't render the same way Chrome does (which is the web view used on Windows), so multi-platform support is not totally seamless.
Using something like Servo as a lightweight, platform-independent web view seems like the way forward, but it's not ready yet.
Izkata 3 hours ago [-]
> Tauri is not as platform agnostic as Electron is
Screen recording straight from a regular browser window, though it creates GIFs instead of video files. Links to a git repo so you can set it up locally.
brooke2k 1 hours ago [-]
seconded -- tried to use tauri for a cross-platform app but the integrated webview on android is horrendous. had to rewrite basic things from scratch to work around those issues, at which point why am I even using a "cross-platform" framework?
I imagine if you stick to desktop the situation is less awful but still
lvass 2 hours ago [-]
>on Linux it uses WebKitGTK
It's about time Linux desktops adopted some form of ${XDG_WEB_ENGINES:-/opt/web_engines} convention to have web-based programs fetch their engines as needed and play nice with each other.
hulitu 1 hour ago [-]
It has: /dev/null /s
tough 3 hours ago [-]
Thanks, didn't know about Servo; hopefully we'll get there. Electron really is bloated and any app using it eats my RAM no matter how much of it I have
ranger_danger 2 hours ago [-]
> Also webapps are just great nowadays most OS support install PWA's fairly decently no?
I would say no, and some are actively moving away from PWA support even if they had it before.
Plus, electron et al let you hook into native system APIs whereas a PWA cannot, AFAIK.
ericmcer 5 hours ago [-]
The app itself is probably much bigger than 250MB. If it is using Electron and React or another JS library, like a million other UIs, just the dependencies will be almost that big.
aziaziazi 5 hours ago [-]
Just my hypothesis: some software includes video tutorials accessible offline. A short but uncompressed high-res video can easily get big.
256_ 3 hours ago [-]
It was probably written by the type of programmers who criticise programmers like me for using "unsafe" languages.
rat9988 3 hours ago [-]
You probably deserve to be criticized if you think this is the culprit.
asmor 3 hours ago [-]
"How can I make this about me and my C/C++ persecution complex?"
lawgimenez 6 hours ago [-]
I don’t use their software, but if someone does, they should be able to decompile it.
iends 6 hours ago [-]
It's an electron app.
ranger_danger 3 hours ago [-]
I would bet money it's electron
homebrewer 2 hours ago [-]
It's probably their way of tracking active users without telling you so, so it makes a lot of sense to "check for updates" as frequently as possible.
zahlman 5 hours ago [-]
Not everyone even has an Internet connection that can reliably download 250MB in 5 minutes.
Yes, even in metropolitan areas in developed countries in 2025.
Hikikomori 4 hours ago [-]
Even doable on very long range ADSL, guess there are still some dialup users.
mlyle 4 hours ago [-]
250MB every 300 seconds is 250 × 8 / 300 ≈ 6.7 megabits/second, plus overhead. Many DSL circuits exceed this, but not all.
Retric 1 hour ago [-]
Most DSL I’ve seen has been way slower than 6.5 megabits/s. If you’re that close to infrastructure you can likely get cable etc.
1.5 megabits/s is still common, but Starlink is taking over.
zahlman 4 hours ago [-]
Not dialup. Just bad last-mile wiring, as far as I can tell.
Apparently such service is still somehow available; I found https://www.dialup4less.com with a web search. Sounds more like a novelty at this point. But "real" internet service still just doesn't work as well as it's supposed to in some places.
mr_toad 3 hours ago [-]
My current AirBnB has only cellular backed WiFi which would struggle to download 250MB at peak times.
ranger_danger 2 hours ago [-]
I struggle to get close to 6 Mbps on good days... some of us are still stuck on DSL monopolies.
f1shy 4 hours ago [-]
Germany?
zahlman 4 hours ago [-]
Canada. But yes, I've heard the stories about Germany, and Australia too.
In point of fact, I can fairly reliably download at that rate (for example I can usually watch streaming 1080p video with only occasional interruptions). The best case has been over 20Mbit/s. (This might also be partly due to my wifi; even with a "high gain" dongle I suspect the building construction, physical location of computer vs router etc. causes issues.)
arvindh-manian 3 hours ago [-]
Obviously five minutes is unnecessarily frequent, but one network request every five minutes doesn't sound that bad to me. Even if every app running on my computer did that, I'm not sure I'd notice.
hulitu 14 minutes ago [-]
> but one network request every five minutes doesn't sound that bad to me
Even if it is made to CIA/GRU/chinese state security ? /s
bredren 5 hours ago [-]
Little Snitch catches these update checks, and I realize now that it should have an additional piece of rule metadata: *how often* requests to this endpoint should be allowed (LS should support throttling, not just yes/no)
tough 3 hours ago [-]
murus+snail?
outsidein 4 hours ago [-]
Microsoft Intune WUDO has a similar bug, costing my department €40,000 per month in internal charging for firewall log traffic from blocked TCP 7680 requests. 86,000 requests per day per client, 160 million per day total. MS confirmed the bug but did nothing to fix it.
hulitu 16 minutes ago [-]
> MS confirmed the bug but did nothing to fix it.
They are building features right now. There are a lot of bugs which Microsoft will never fix, or fixes only after years (double clicks registered on single mouse clicks, clicking "x" to close a window also closing the window underneath, GUI elements rendered black because the monitor isn't recognized, etc.).
skirge 3 hours ago [-]
How? Do you investigate each blocked packet as a separate alert?
outsidein 1 hour ago [-]
Yes, all packets get logged (metadata only). Otherwise we wouldn’t know there is an issue.
Those packets consume bandwidth and device utilization too, but that is a flat fee, whereas log traffic is billed per GB, so we investigated where the unexpected growth came from.
vrosas 6 hours ago [-]
When I built an app that “phones home” regularly, I added the ability for the backend to respond to the client with an override backoff that the client would respect over the default.
gblargg 39 minutes ago [-]
Seems like the proper fix would have been to remove the file from the server once they noticed the increased traffic. Then clients would just fail the update check each time and not tie up bandwidth.
nyarlathotep_ 2 hours ago [-]
Wish people would actually do things like this more often.
Plenty of things (like PlayStation's telemetry endpoint, for one of many examples) just continually phone home if they can't connect.
The few hours a month of PlayStation uptime show 20K DNS lookups for the telemetry domain alone.
SnorkelTan 4 hours ago [-]
Why not just use HTTP Retry-After? Then you can use middleware or a proxy to control this behavior. The downside is that system operation becomes more opaque and fragmented across systems.
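Roughly (an illustrative Python sketch with the third-party requests library, not any app's actual stack):

    import time
    import requests

    def check_for_update(url):
        resp = requests.get(url)
        if resp.status_code == 429:
            # Middleware/proxy is shedding load: honor Retry-After
            # (assumed here to be in seconds, not an HTTP-date)
            wait = int(resp.headers.get("Retry-After", "3600"))
            time.sleep(wait)  # or better: schedule the next check for later
            return None
        resp.raise_for_status()
        return resp.json()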
vrosas 3 hours ago [-]
Because the client in this case is not a browser.
aziaziazi 5 hours ago [-]
Could you expand on what an "override backoff" is?
ses1984 5 hours ago [-]
The client might have a feature to retry certain failures, and it’s using a particular rate, probably not retrying n times one right after the other in rapid succession. This is called backoff.
The server can return an override backoff so the server can tell the client how often or how quickly to retry.
It’s nice to have in case some bug causes increased load somewhere, you can flip a value on the server and relieve pressure from the system.
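A minimal sketch of the idea (Python; the endpoint and the backoff_seconds field are made up):

    import time

    DEFAULT_BACKOFF = 6 * 3600  # client default: check every 6 hours

    def update_loop(check):
        backoff = DEFAULT_BACKOFF
        while True:
            response = check()  # e.g. GET /latest-version, returns a dict
            # A server-side override wins over the client default, so ops
            # can flip one value on the server to relieve pressure everywhere.
            backoff = response.get("backoff_seconds", DEFAULT_BACKOFF)
            time.sleep(backoff)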
vrosas 2 hours ago [-]
Exactly. Without going too deep into the architecture, the clients are sending data to the backend in real time, but often that data is not actionable during certain periods, so the backend can tell the clients to bundle the data and try again after a certain amount of time, or just discard the data it's currently holding and try again later (i.e. in 5/10/n seconds)
aziaziazi 1 hour ago [-]
Thanks for your responses. I’m used to "throttle"; that seems to be a synonym, right?
vrosas 20 minutes ago [-]
sure, you could say throttle.
treyd 5 hours ago [-]
Presumably the back end could tell the client not to check again for some amount of time. Sounds similar but different to cache TTLs, as those are passive.
Tade0 5 hours ago [-]
Several months ago I was dealing with huge audio interruption issues - typical sign of some other, blocking, high-priority process taking too long.
Turns out Adobe's update service on Windows reads (and I guess also writes) about 130MB of data from disk every few seconds. My disk was 90%+ full, so the usual slowdown was occurring, limiting disk I/O to around 80MB/s.
Disabled the service and the issues disappeared. I bought a new laptop since, but the whole thing struck me as such an unnecessary thing to do.
I mean, why was that service reading/writing so much?
crazygringo 1 hours ago [-]
Every 5 minutes is too often yes, but it hardly matters for a tiny HTTP request that barely has a body.
So yes it should only be once a day (and staggered), but on the other hand it's a pretty low-priority issue in the grand scheme of things.
Much more importantly, it should ask before downloading rather than auto-download. Automatic downloads are the bane of video calls...
VWWHFSfQ 6 hours ago [-]
I would be so embarrassed about this bug that I would be terrified to write it up like this. Also admitting that your users were forced to download 10s or 100s of gigabytes of bogus updates nearly continuously. This is the kind of thing that a lot of people would just quietly fix. So kudos (I guess) to blogging about it.
senordevnyc 5 hours ago [-]
[flagged]
KronisLV 5 hours ago [-]
> To re-cap: Screen recording software. Checks for updates every five (5) minutes. That's 12 times an hour.
The tone might be somewhat charged, but this seems like a fair criticism. I can’t imagine many pieces of software that would need to check for updates quite that often. Once a day seems more than enough, outside of the possibility of some critical, all consuming RCE. Or maybe once an hour, if you want to be on the safe side.
I think a lot of people are upset with software that they run on their machines doing things that aren’t sensible.
For example, if I wrote a program that allows you to pick files to process (maybe some front end for ffmpeg or something like that) and decided to keep an index of your entire file system and rebuild it frequently just to add faster search functionality, many people would find that to be wasteful both in regards to CPU, RAM and I/O, alongside privacy/security, although others might not care or even know why their system is suddenly slow.
mieko 5 hours ago [-]
For contrast: Chrome, a piece of software which has a huge amount of attackable surface area, and lives in a spot with insane stakes if a vulnerability is found, checks for updates every five hours, last I read.
turtlebits 5 hours ago [-]
No one is commenting on the actual bug. The fact that it auto-downloads 250MB updates is user-hostile. On top of that, checking every 5 minutes? What if I'm on a mobile connection?
Why not just follow every Mac app under the sun and prompt if there's an update when the app is launched and download only if the user accepts?
f1shy 4 hours ago [-]
I think the critique here is not directed at one individual, the person who actually wrote the code. That would be OK; it can happen. Here we are talking about the most valued company in the world, which hopefully has many architects, designers and literally an army of testers… and it still makes such a brutal error.
abstractspoon 7 hours ago [-]
I find it ludicrous that the developers of an app as insignificant as a screen recorder would think it's necessary to check for updates every 5 minutes.
Once a day would surely be sufficient.
smallpipe 6 hours ago [-]
I make CPUs for a living. I'm happy these people exists, we'll always need faster CPUs.
VWWHFSfQ 6 hours ago [-]
The big clouds love these people too. So much of the software industry is just an outrageous combination of inexperience and "YOLO". Every problem can be solved by just giving AWS another $100,000 this month because we don't have time (and don't know how) to make even basically optimized software. So just burn the gas and electricity and give more money to the YAML merchants at Amazon.
999900000999 5 hours ago [-]
That was the promise of "The Cloud".
Data centers are big and scary; nobody wanted to run their own. The hypothetical cost savings of firing half the IT department were too good to pass up.
AWS even offered some credits to get started, first hit's free.
Next thing you know, your AWS spend is out of control. It just keeps growing and growing and growing. Instead of writing better software, which might slow down development, you just spend more money.
Ultimately in most cases it's cheaper in the short term to give AWS more money.
A part of me wants to do a $5 VPS challenge: how many users can you serve for $5 per month? Maybe you'd actually need to understand what your server is doing?
I'm talking nonsense, I know.
sombrero_john 5 hours ago [-]
> Instead of writing better software, which might slow down development, just spend more money.
Except this is unironically a great value proposition.
ryandrake 2 hours ago [-]
We are throwing everything under the bus, including the user's battery, CPU, memory, bandwidth, the company's cloud costs and energy usage, just so developers can crap out software slightly faster.
skull723 5 hours ago [-]
Not really. I run several web applications on one $15 VPS, although the user count is <5. But I think it would take quite a lot of users for resource usage to reach a critical level.
nyarlathotep_ 2 hours ago [-]
> outrageous combination of inexperience
Correction--many have years of inexperience. Plenty of people that do things like this have "7 years designing cloud-native APIs".
whstl 2 hours ago [-]
Oh, cloud native. For a few years people used to look at you funny if you were ...gasp... using battle-tested open source software instead of the overpriced AWS alternative. I'm so glad we're finally seeing pushback.
ngruhn 5 hours ago [-]
> the YAML merchants at Amazon
I lost it
rvz 4 hours ago [-]
> Every problem can be solved by just giving AWS another $100,000 this month because we don't have time (and don't know how) to make even basically optimized software.
Don't forget the Java + Kafka consultants telling you to deploy your complicated "micro-service" to AWS, so you end up spending tens of millions on their "enterprise optimized compliant best practice™" solution and needing to raise money every 6 months instead of saving costs as you scale up.
Instead, you spin up more VMs and pods to "solve" the scaling issue, losing even more money.
It is a perpetual scam.
EvanAnderson 7 hours ago [-]
> Once a day would surely be sufficient.
Weekly or monthly would be sufficient. I'd also like "able to be disabled manually, permanently" as an option, too.
ryandrake 6 hours ago [-]
How about never? If I want to update my software, I'll update it. I don't need the application itself to hound me about it, at any frequency.
pixl97 5 hours ago [-]
Because historically your average user will not update the software and then some worm is going about causing massive damage all over the internet.
EvanAnderson 4 hours ago [-]
This is overblown fear mongering, especially for desktop apps.
There are only a few applications with exposed attack surface (i.e. accepting incoming requests from the network) and a large enough install base to cause "massive damage all over the internet". A desktop screen recorder app has no business being constructed in a manner that's "wormable", nor does it have an install base that would result in significant replication.
The software that we need the "average user" to update is stuff like operating systems. OS "manufacturers" have that mostly covered for desktop OS's now.
Microsoft, even though their Customers were hit with the "SQL Slammer" worm, doesn't force automatic updates for the SQL Server. Likewise, they restrict forcing updates only to mainstream desktop OS SKUs. Their server, embedded, and "Enterprise" OS SKUs can be configured to never update.
mgkimsal 6 hours ago [-]
Hrm... might depend on the purpose of the update. "New feature X" announcements every few days... I hate and disable. "Warning - update now - security bug"... I want to be notified of those pretty quickly.
hennell 4 hours ago [-]
Ironically the only real call for an update check every 5 mins would be so you can quickly fix a problem like everyone downloading the update every 5 mins.
pcthrowaway 2 hours ago [-]
> Once a day would surely be sufficient.
Well they might need to rush out a fix to a bug that could be harmful for the user if they don't get it faster.
For example, a bug that causes them to download 250MB every 5 minutes.
chris_va 5 hours ago [-]
I generally find that these things are put in during development, and then people forget to take them out.
VladVladikoff 5 hours ago [-]
I honestly lost so much respect for the author after reading this that I completely bailed on the article. An update check every 5 minutes is entirely unhinged behaviour.
ljm 3 hours ago [-]
Why pay for your own runners when you can do CI/CD on your users’ machines?
dist-epoch 7 hours ago [-]
You can use that as a hidden way of tracking how many active users you have at any time.
Good way of showing adoption and growth.
panki27 6 hours ago [-]
That's stretching my definition of "good" quite a bit.
mystified5016 6 hours ago [-]
You can still do that with daily, weekly, or monthly checks.
Nobody under any circumstances needs usage stats with 5 minute resolution. And certainly not a screen recorder.
closewith 6 hours ago [-]
You definitely can, although it would be unlawful under the GDPR without user consent, so you could never release the figures publicly.
ahtihn 4 hours ago [-]
It wouldn't if you're not tracking user identity.
Websites get this data pretty much by default and they don't need consent for it.
closewith 3 hours ago [-]
If you're deduplicating via IP or any other identifier, then it will be subject to the requirement for a legal basis.
Yeri 7 hours ago [-]
a thousand times this.
amelius 7 hours ago [-]
one time should be sufficient
jve 14 hours ago [-]
> Screen Studio is a screen recorder for macOS. It is desktop app. It means we need some auto-updater to allow users to install the latest app version easily.
No, it doesn't mean that.
The auto-updater introduced a series of bad outcomes:
- Downloading the update without consent, causing traffic for the client.
- Not only that, but the download kept repeating every 5 minutes. You did at least detect whether the user is on a metered connection, right...?
- A bug where the update popup interrupts flow.
- A popup is a bad thing in itself that you do to your users. I think it is OK when closing the app, with the rest done in the background.
- Some people actually pay attention to outgoing connections apps make, and even a simple update check every 5 minutes is excessive. Why even do it while the app is running? Check on startup and ask on close (see the sketch below). Again, some complexity: assume you're not on a network, do it in the background, and don't bother retrying much.
- Additional complexity in the app that caused all of the above. And it came with a price tag for the developer.
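Something like this would already be friendlier (a rough Python sketch; the endpoint, version handling and error policy are all made up):

    import json
    import threading
    import urllib.request

    UPDATE_URL = "https://example.com/latest.json"  # illustrative
    CURRENT_VERSION = "2.3.1"
    update_available = None  # read when the user closes the app

    def check_once_in_background():
        def worker():
            global update_available
            try:
                with urllib.request.urlopen(UPDATE_URL, timeout=5) as resp:
                    latest = json.load(resp)["version"]
                if latest != CURRENT_VERSION:
                    update_available = latest
            except (OSError, ValueError, KeyError):
                pass  # offline/metered/flaky: stay quiet, try next launch
        threading.Thread(target=worker, daemon=True).start()

    check_once_in_background()  # at startup; ask before downloading on close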
Wouldn't the app store be the perfect way to handle updates in this case, offloading the complexity there?
HelloNurse 13 hours ago [-]
App store updates are perfect: no unnecessary complications, no unnecessary work (assuming Screen Studio is published and properly updated in the app store), and the worst case scenario is notifications about a new Screen Studio version ending up in a Screen Studio recording in progress.
Thinking of it, the discussed do-it-yourself update checking is so stupid that malice and/or other serious bugs should be assumed.
ryandrake 6 hours ago [-]
Exactly. The AppStore already exists and does updates (either automatically or manually, configurable by the user). The developer didn't have to lift a finger to get this functionality. Imagine sitting down and spending time adding functionality to your application that is already provided for free by the operating system, and then after all that, doing it incorrectly!
HelloNurse 6 hours ago [-]
Starting with the paid developer accounts, the Apple app store isn't "provided for free by the operating system" and it is a source of endless busywork, fear and suffering, but the argument stands: a professional Macintosh software vendor uses the app store because Macintosh users expect it, so it can be assumed that "properly" publishing new software versions to the app store is a sunk cost that should be made as useful as possible.
ryandrake 5 hours ago [-]
By "provided for free" I mean the App Store comes with the OS, costs nothing (monetarily) to the developer over the existing annual Apple Developer Program fee, which pretty much all macOS developers pay anyway, and can be counted on to exist on all macOS installations.
Telemakhos 5 hours ago [-]
> malice and/or other serious bugs should be assumed
Going back to the blog post and re-reading it with this possibility in mind is quite a trip.
> It turns out thousands of our users had the app running in the background, even though they were not using it or checking it for weeks (!). It meant thousands of users had auto-updater constantly running and downloading the new version file (250MB) over and over again every 5 minutes
This could easily have been data exfiltration from client computers instead, and few (besides the guy whose internet contract got cancelled for heavy traffic) would have noticed.
bearjaws 6 hours ago [-]
Yeah no, publishing to the App Store is a nightmare in cost and time. I can 100% guarantee they still saved money on 30% fees even after this $8000 snafu.
Screen Studio has 32k followers; let's say 6% are end users. 2,000 users at $229 is ~$458k in revenue, which is ~$137k in App Store fees at 30%.
I am going to say writing your own app update script is a wash time-wise, as getting your app published is not trivial, especially for an app that requires as many permissions as Screen Studio.
skinner927 6 hours ago [-]
Some people don’t like using the AppStore. I like to keep backups of installers so I can control the version. And if it gets pulled from the AppStore, I’ll always have a copy.
Nition 11 hours ago [-]
While we're listing complaints... 250MB for a screen recorder update?
yojo 7 hours ago [-]
That’s pretty much the floor for an Electron app.
If you’re a small shop or solo dev, it is real hard to justify going native on three platforms when electron gives it for (near) free. And outside of HN, no one seems to blink at a 250MB bundle.
There are alternatives like Tauri that use the system browser and allow substantially smaller bundles, but they’re not nearly as mature as Electron, and you will get cross platform UI bugs (some of which vary by user’s OS version!) from the lack of standardization.
pcthrowaway 2 hours ago [-]
> And outside of HN, no one seems to blink at a 250MB bundle.
Please, many people connect to the internet via a mobile phone hotspot, at least occasionally.
This bug would likely cause you to go through your entire monthly data in a few hours or less.
yojo 1 hours ago [-]
I’m not excusing this bug. There are several poor decisions that went into this issue, but my contention is that using Electron (with the resulting 250MB bundle) is not one of them.
You should probably not roll your own auto-updater.
If you do, checking every 5 minutes for updates is waaaay too often (and likely hurts battery life by triggering the radio).
And triggering a download without a user-prompt also feels hostile to me.
The app size compounds the problem here, but the core issue is bad choices around auto-updating
rafram 6 hours ago [-]
This app is Mac-only, which makes the choice to use Electron a little confusing.
yojo 6 hours ago [-]
That is… surprising.
I’d actually seen this project before because the author did a nice write up on using React portal to portal into electron windows[1], which is something I decided to do in my app.
I’d just assumed his was a cross platform project.
> And outside of HN, no one seems to blink at a 250MB bundle.
I can remember when I would have to leave a 250MB download running overnight.
Before that, I can remember when it would have filled my primary hard drive more than six times over.
... Why can't the app-specific code just get plugged into a common, reusable Electron client?
yojo 3 hours ago [-]
Different versions of electron bundle different versions of chromium. There can/will be rendering differences between them.
Tauri is an alternative framework that uses whatever web view the OS provides, saving ~200MB of bundle size. On Mac that’s a (likely outdated) version of Safari. On Windows it’ll be Edge. Not sure what Linux uses; I’d guess it varies by distro.
The promise of Electron (and it’s an amazing value prop) is that your HTML/JS UI will always look and work the same as in your dev environment, no matter what OS the host is running.
I don’t have the time or inclination to test my app on the most recent 3 releases of the most popular operating systems every time I change something in the view layer. With Electron, I trade bundle size for not having to do so.
I do think alternatives like Tauri are compelling for simple apps with limited UI, or where a few UI glitches are acceptable (e.g. an internal app). Or for teams that can support the QA burden.
jasonjmcghee 4 hours ago [-]
You mean like WebKit which Tauri uses?
yojo 3 hours ago [-]
I go into more detail in a sibling comment, but Tauri does not provide a standardized web runtime. The webview you get depends on your OS and OS version. They’re all “WebKit”, but definitely do not all render the same. I have built a Tauri app and switched to Electron after encountering multiple x-plat rendering bugs.
And even when nothing changed?!? Fucking lazy developers aka "I have an idle ≥1Gb/s pipe to the download server". What happened to rsync/zsync/zstd (with dictionary)? There are so many good tools freely available to reduce wasted bandwidth when you insist on reinventing the wheel. sigh
amelius 6 hours ago [-]
As a user I hate auto updates. It feels like someone pulling the rug from under me.
socalgal2 14 hours ago [-]
Does the app store handle staged rollouts?
That was a thing I thought was missing from this writeup. Ideally you only roll out the update to a small percentage of users. You then check to see if anything broke (no idea how long to wait, 1 day?). Then you increase the percentage a little (say, 1% to 5%), wait a day, and check again. Finally you update everyone (who has updates on)
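The usual trick is a stable hash bucket per device, something like (a sketch; names made up):

    import hashlib

    def in_rollout(device_id: str, percent: int) -> bool:
        # Deterministic bucket 0-99: the same device stays in (or out)
        # as the rollout percentage ramps 1% -> 5% -> 100%.
        bucket = int(hashlib.sha256(device_id.encode()).hexdigest(), 16) % 100
        return bucket < percent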
dahcryn 11 hours ago [-]
yes, obviously something as mature as the App Store supports phased rollout. I believe it is even the default setting once you reach certain audience sizes. Updates are always spread over 7 days, slowly increasing the numbers
djxfade 12 hours ago [-]
Yes it does support this
dist-epoch 7 hours ago [-]
> Wouldn't app store be perfect way to handle updates
But then the HN crowd would complain "why use an app store? that's gate keeping, apple could remove your app any day, just give me a download link, and so on..."
You literally can't win.
wqaatwt 6 hours ago [-]
You can? Don’t check for updates every 5 minutes. Daily or even weekly would be sufficient for an app like this (if auto-updater is even necessary at all.. just show a notification)
spaqin 15 hours ago [-]
I would also put into question if you _really_ need to check for updates every 5 minutes. Once per startup is already enough, and if you're concerned about users who leave it on for days, it could easily be daily or even less often.
stevage 8 hours ago [-]
It's absolutely way too frequent.
Their users do not care about their screen recording studio anywhere near as much as the devs who wrote it do.
Once a month is probably plenty.
Personally, I disable auto-update on everything wherever possible, because the likelihood of annoying changes is much greater than welcome changes for almost all software I use, in my experience.
Lammy 14 hours ago [-]
A 5 minute update check interval is usage-reporting in disguise. Way fewer people would turn off a setting labeled “check for updates” than one labeled “report usage statistics”.
bilekas 14 hours ago [-]
Don’t give them ideas!!
knowitnone 5 hours ago [-]
or they can send usage statistics without you knowing or being able to disable it.
blitzar 7 hours ago [-]
never attribute to malice what can be attributed to incompetence
Lammy 6 hours ago [-]
No. Eradicate this line of thinking from your brain. If the outcome is the same then the intent doesn't matter.
llmthrow103 54 minutes ago [-]
In fact, assume the opposite unless you have a reason to assume otherwise (aka a close personal relationship). Giving strangers/businesses that you have no connection to the benefit of the doubt when they harm you is a good way to get taken advantage of.
GuinansEyebrows 6 hours ago [-]
Yes and one provides cover for the other.
o11c 2 hours ago [-]
Never contort your reasoning to attribute to incompetence what is much better explained by malice. Especially when politics or money is involved, malice should be the assumed default.
YetAnotherNick 14 hours ago [-]
Do they say that they don't do any usage reporting?
TowerTall 12 hours ago [-]
from their FAQ on the buttom of the fronpage:
Screen Studio can collect basic usage data to help us improve the app, but you can opt out of it during the first launch. You can also opt out at any time in the app settings.
Spivak 14 hours ago [-]
Eh, this one is probably ignorance over malice. It's super common to see people who need to make an arbitrary interval choice go with 300 out of habit.
karhuton 14 hours ago [-]
To be as user friendly as possible, always ask if user wants automatic background updates or not. If you can’t update without user noticing it, please implement manual updates as two mechanisms:
1) Emergency update for remote exploit fixes only
2) Regular updates
The emergency update can show a popup, but only once. It should explain the security risk. But allow user to decline, as you should never interrupt work in progress. After decline leave an always visible small warning banner in the app until approved.
The regular update should never pop up, only show a very mild update reminder that is NOT always visible, but instead sits behind a frequently used menu. Do not show notification badges; they frustrate people with an inbox-zero-type condition.
This is the most user friendly way of suggesting manual updates.
You have to understand, if user has 30 pieces of software, they have to update every day of the month. That is not a good overall user experience.
zveyaeyv3sfye 11 hours ago [-]
> You have to understand, if user has 30 pieces of software, they have to update every day of the month. That is not a good overall user experience.
That's not a user issue though, it's a "packaging and distribution of updates" issue which coincidentally has been solved on other OSes using a package manager.
adrianN 8 hours ago [-]
Getting used to changes is not something a package manager can help with.
wqaatwt 6 hours ago [-]
Or a developer problem when they keep updating their apps every few days for no apparent reason..
tom1337 12 hours ago [-]
I'd also question whether the updater needs to download the update before the user says they want it. Why not check a simple endpoint for whether a newer version is available and, if so, prompt the user that an update can be downloaded, then download it? This would also let users delay the update if they are on metered connections.
If the update interval had been a day or more, they probably wouldn't even have noticed the bug after a month, the way they did with a 5-minute check interval.
sixtyj 8 hours ago [-]
Checking for updates every 5 minutes is a bug in itself ;)
It is sort of fun (at $8,000) since it was "just" a screen recorder, but imagine this with a banking app or any other heavily installed app.
All cloud providers should have alerts for excessive network use by default. And they should ask developers if they really want to turn alerts off.
I remember a Mapbox app that cost much more, just because the provider charged by the month… and there was a great dispute over whose fault it was…
m3adow 14 hours ago [-]
First thing I thought as well. Every 5 minutes for a screen recording software is an absurd frequency. I doubt they release multiple new versions per day.
trollbridge 8 hours ago [-]
And if it is necessary, the proper way to do this is via DNS, with a record whose TTL is less than 5 minutes, not by pinging some webserver.
This could have easily been avoided by prompting the user for an update, not silently downloading it in the background... over and over.
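E.g. publish the latest version number as a TXT record and let resolver caching absorb most of the polling (a sketch with the third-party dnspython package; the record name is made up):

    import dns.resolver  # pip install dnspython

    def latest_version():
        # Cached by resolvers for the record's TTL, so most "is there an
        # update?" checks never reach your own infrastructure.
        answer = dns.resolver.resolve("version.example.com", "TXT")
        return answer[0].strings[0].decode()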
ghurtado 15 hours ago [-]
IIRC, Every 5 minutes used to be the standard interval between email checks, back in the days of dialup and desktop email clients.
How the times have changed ..
lucb1e 14 hours ago [-]
It's near-instant now not usually because of more incessant polling, but because it simply keeps the connection open (can last many hours without sending a single byte, depending also on the platform) and writes data onto it as needed (IMAP IDLE). This has gotten more efficient if anything
pjmlp 14 hours ago [-]
And because of how expensive they were in Portugal, I never did that; it was always on manual.
wodenokoto 14 hours ago [-]
Depends on the application. I have my browser running for months at a time.
atoav 14 hours ago [-]
Yeah, but that should be a variable anyway, maybe even a variable provided by the server. But in this case it should be on demand, with the old version cached and the new one downloaded only when there is a new version, at most once a day.
albert_e 7 hours ago [-]
What about the bandwidth burned needlessly for thousands of users on their data plans.
At some scale, such careless mistakes are going to create real effects for all users of the internet through congestion as well.
If this was not a $8000 mistake but was somehow covered by a free tier or other plan from Google Cloud, would they still have considered it a serious bug and fixed it as promptly?
How many such poor designs are out there generating traffic and draining common resources.
bee_rider 6 hours ago [-]
They mention specifically handling the situation for one user. So, I guess it is a case-by-case thing.
gwbas1c 5 hours ago [-]
In comparison, when I shipped a Mac desktop application:
We used Sparkle, https://sparkle-project.org/, to do our updates. IMO, it was a poor choice to "roll their own" updater.
Our application was very complicated and shipped with Mono... And it was only about ~10MB. The Windows version of our application was ~2MB and included both 32-bit and 64-bit binaries. WTF are they doing shipping a 250MB screen recorder?
So, IMO, they didn't learn their lesson. The whole article makes them look foolish.
latexr 4 hours ago [-]
> WTF are they doing shipping a 250MB screen recorder?
250 MB is just the download DMG, the app itself is almost 550 MB. It’s an Electron app.
BeFlatXIII 2 hours ago [-]
550 megs?!?!?? On Apple’s unreasonably stingy SSD sizes?
Who would be foolish enough to download that?
ericmcer 5 hours ago [-]
People are willing to trade performance/size for convenience. Writing your application using Electron + React means it will probably ship as a >500MB app that sucks up 500MB of RAM while running, but you have a much easier dev experience and can deliver a "flashy" UI with minimal effort.
gwbas1c 5 hours ago [-]
Our 10MB was also for a "much easier dev experience" on Mac. The framework we shipped was basically 4x the size of the application.
donatj 15 hours ago [-]
I am always kind of a stickler about code reviews. I once had a manager tell me that I should leave more to QA with an offhand comment along the lines of "what is the worst that could happen" to which I replied without missing a beat "We all lose our jobs. We are always one bad line of code away from losing our jobs"
The number of times I have caught junior or even experienced devs writing potential PII leaks is absolutely wild. It's just crazy easy in most systems to open yourself up to potential legal issues.
monkeyelite 6 hours ago [-]
Code reviews kill velocity: they introduce context switching and are make-work; it feels like you're doing something by making a PR etc., but you're not.
The context where they make the most sense is accepting code from strangers in a low-trust environment.
The alternative to trying to prevent mistakes is making it easy to find and correct them. Run CI on code after it’s been merged and send out emails if it’s failed. At the end of a day produce a summary of changes and review them asynchronously. Use QA, test environments, etc.
latexr 4 hours ago [-]
> Code reviews kill velocity
This feels like a strange sense of priorities which would be satirised in a New Yorker/Far Side single-panel comic: “Sure, my mistake brought down the business and killed a dozen people, but I’m not sure you appreciate how fast I did it”.
Code should be correct and efficient. Monkeys banging their heads against a keyboard may produce code fast, but it will be brittle and you’ll have to pay the cost for it later. Of course, too many people view “later” as “when I’m no longer here and it’s no longer my problem”, which is why most of the world’s software feels like it’s held together with spit.
monkeyelite 4 hours ago [-]
> would be satirised in a New Yorker/Far Side single-panel comic:
Thanks for taking my experience and comment seriously and challenging your preconceptions.
> Code should be correct and efficient.
When it ships to customers. The goal is to find the bugs before then. Having a stable branch can be accomplished in many ways besides gating each merge with a review.
Do you have any studies to show how effective synchronous code review is in preventing mistakes? If they are such a good idea why not do 2 or 3?
latexr 4 hours ago [-]
> Thanks for taking my experience and comment seriously and challenging your preconceptions.
I apologise if my comment read as mean. I wanted to make the joke and it may have overshadowed the point.
> Do you have any studies to show how effective synchronous code review is in preventing mistakes?
I could’ve been clearer. I’m not advocating for code reviews, I’m advocating for not placing “velocity” so high on the list of priorities.
> If they are such a good idea why not do 2 or 3?
This argument doesn‘t really make sense, though. You’ve probably heard the expression “measure twice, cut once”—you don’t keep measuring over and over, you do it just enough to ensure it’s right.
monkeyelite 4 hours ago [-]
> I’m not advocating for code reviews.
Well my comment is against synchronous code reviews. So we are not in disagreement.
> you do it just enough to ensure it’s right.
I agree. Each layer of review etc is a cost and has benefits. You want to pick an appropriate level.
mugsie 3 hours ago [-]
> Code reviews kill velocity
Yes, they kill your velocity. However, the velocity of a team can be massively increased by shipping small things a lot more often.
Stable branches that sit around for weeks are the real velocity killer, and make things a lot more risky on deployment.
monkeyelite 59 minutes ago [-]
I agree with all of that - no contradiction.
ljm 2 hours ago [-]
The up-front cost of code review can easily be tripled or quadrupled when it's distributed over several weeks after the fact in the form of unplanned work, each instance of which incurs its own cost of context switching, as well as the cost of potential rework.
The purpose of such a review is a deliberate bottleneck in the earlier stage of development to stop it becoming a much larger bottleneck further down the line. Blocking one PR is a lot cheaper than blocking an entire release, and having a human in the loop there can ensure the change is in alignment in terms of architecture and engineering practices.
CI/CD isn’t the only way to do it but shifting left is generally beneficial even with the most archaic processes.
monkeyelite 57 minutes ago [-]
> The up-front cost of code review can be easily be tripled or quadrupled when it’s distributed over several weeks
You’re taking a more extreme position than the one I’m stating. You can review every day or every hour if you want.
> a deliberate bottleneck in the earlier stage
Wouldn’t it be better if we could catch bugs AND avoid the bottleneck? That’s the vision. People with good intentions may disagree about how to accomplish that.
Capricorn2481 5 hours ago [-]
> Code reviews kill velocity - introduce context switching, and are make work
This is the same point three times, and I don't agree with it. This is like saying tests kill velocity, there's nothing high velocity about introducing bugs to your code base.
Everything introduces context switching, there's nothing special about code reviews that makes it worse than answering emails, but I'm not going to ignore an important email because of "context switching."
Everyone makes mistakes, code reviews are a way to catch those. They can also spread out the knowledge of the code base to multiple people. This is really important at small companies.
CI is great, but I have yet to see a good CI tool that catches the things I do.
monkeyelite 4 hours ago [-]
> This is the same point three times
No it isn’t. Fake work, synchronization, and context switching are all separate problems.
> code reviews are a way to catch those
I said you can do reviews - but there is no reason to stop work to do them.
Why not require two or three reviews if they are so helpful at finding mistakes?
I agree everyone makes mistakes - that’s why I would design a process around fixing mistakes, not screening for perfection.
How many times have you gone back to address review comments and introduced a regression because you no longer have the context in your head?
cbsks 1 hours ago [-]
> Why not require two or three reviews if they are so helpful at finding mistakes?
For safety-critical software, e.g. ASIL-D, you will absolutely have a minimum of 2 reviewers. And that’s just for the development branch. Merging to a release branch requires additional sign-offs from the release manager, safety manager, and QA.
By design the process slows down “velocity”, but it definitely increases code quality and reduces bugs.
monkeyelite 54 minutes ago [-]
Once again let me reframe the mindset. Trying to get a perfect change where you anticipate every possible thing that will go wrong beforehand is impossible - or at least extremely costly. The alternative is to spend your effort on making it easy to find and fix problems after.
mugsie 3 hours ago [-]
> Why not require two or three reviews if they are so helpful at finding mistakes?
Places do? A lot of open-source projects have the concept of dual reviews, and a lot of codebases have CODEOWNERS to ensure the people with the context review the code, so you could have 5-10 reviewers on a large PR
monkeyelite 57 minutes ago [-]
Does it make the code better? Are the best projects the ones with the most review?
loeg 4 hours ago [-]
> Why not require two or three reviews if they are so helpful at finding mistakes?
Diminishing returns, of course. I have worked places where two reviews were required and it was not especially more burdensome than one, though.
I catch so many major errors in code review ~every day that it's bizarre to me that someone is advocating for zero code review.
Capricorn2481 3 hours ago [-]
> No it isn’t. Fake work, synchronization, and context switching are all separate problems
Context switching is a problem because it...kills velocity. Fake work is a problem because it kills velocity. You're saying it's time that could be better spent elsewhere, but trying to make it sound wider. I disagree with the premise.
Synchronization is a new word, unrelated to what you originally wrote.
> How many times have you gone back to address review comments and introduced a regression because you no longer have the context in your head?
Never? I am not unable to code in a branch after a few days away from it. If I were, I would want reviews for sure! Maybe you have had reviews where people are suggesting large, unnecessary structural changes, which I agree would be a waste of time. We're just looking for bug fixes and acceptably readable code. I wouldn't want reviewers opining on a new architecture they read about that morning.
monkeyelite 55 minutes ago [-]
> Synchronization is a new word, unrelated to what you originally wrote.
I believe you can figure it out.
> Never?
Ok well I’m trying to talk to people who have that problem. Because I and my team do.
ValdikSS 6 hours ago [-]
Yep, for what in most other jobs would be a criminal offense, the most serious consequence the individual developer could face is just losing the job.
monkeyelite 6 hours ago [-]
If you demand accountability you need to grant authority.
canucker2016 9 hours ago [-]
...And if there's no one around to review the code?
The website makes it seem like it's a one person shop.
alias_neo 7 hours ago [-]
When I work on my own code, at home, with no-one to assist or review, I write tests, and open a PR anyway, and review it myself, sometimes the next day with fresh eyes, or even 10 minutes later after a quick walk in and out of the room and a glass of water.
If you're not confident you can review a piece of code you wrote and spot a potentially disastrous bug like the one in OP, write more tests.
zarzavat 4 hours ago [-]
Humans are very good at not spotting their own mistakes, that's why writers have editors.
jmull 7 hours ago [-]
I'm pretty conservative about adopting third-party libraries (due to the long-term issues each one has the potential to cause), but an app updater is probably worth it.
It's just tricky, basically one fat edge case, and a critical part of your recovery plan in case of serious bugs in your app.
(This bug isn't the only problem with their home-grown updater. Checking every 5 min is just insane. Kinda tells me they aren't thinking much about it.)
wolrah 6 hours ago [-]
> I'm pretty conservative about adopting third-party libraries (due to the long-term issues each one has the potential to cause), but an app updater is probably worth it.
Especially for a Mac-only application where Sparkle (https://sparkle-project.org/) has been around for almost two decades now and has been widely used across all sorts of projects to the point that it's a de facto standard. I'd be willing to bet that almost every single Mac "power user" on the planet has at least one application using Sparkle installed and most have a few.
Zambyte 7 hours ago [-]
Or better yet, let the system package manager do its job.
wqaatwt 7 hours ago [-]
You’d be forced to use Apple’s App Store, though? I don’t think there is another package manager
Zambyte 6 hours ago [-]
As far as system package managers go, yeah. That's part of the price you pay for choosing Apple (Knows Best) TM. There is brew, nix and the like for applications on MacOS too though.
madeofpalk 6 hours ago [-]
Apple doesn't "know best" - it's just that that is what the system package manager is.
You can use whatever you want outside of the App Store - most will use Sparkle to handle updates https://sparkle-project.org/. I presume Windows is similar.
Zambyte 4 hours ago [-]
> Apple doesn't "know best" - it's just that that is what the system package manager is.
The fact that that is what the system package manager is is why I said Apple "knows best". You can pick from dozens of system packages managers hooked up to hundreds, if not thousands of different repos on Linux.
jarym 15 hours ago [-]
Just amazed that ‘better testing’ isn’t one of the takeaways in the summary.
Just amazed. Yea, suggesting 'write code carefully' as if that'll fix it is a rookie mistake.
So so frustrating when developers treat user machines like their test bed!
fifilura 14 hours ago [-]
Contrarian approach: $8000 is not a lot in this context. What did the CEO think of this? Most of the time it is just a very small speed bump in the overall finances of the company.
Avoidable and unfortunate, but the cost of slowing down development progress by, say, 10% is much higher.
But agree that senior gatekeepers should know by heart some places where review needs to be extra careful. Like security pitfalls, exponential backoff in error handling, and yeah, probably this.
stevage 8 hours ago [-]
I'm sure it cost a lot more than $8000. That was only the direct visible cost to them. There were likely users hit with costs for the additional downloads, who never even knew what was the issue. Users working on a mobile hotspot who had to pay for extra data etc etc.
latexr 4 hours ago [-]
> What did the CEO think of this?
I doubt there’s a CEO. Despite the use of “we”, pretty sure this is one guy building the app. All the copyright notices and social media go back to one person.
rvz 5 hours ago [-]
Imagine if that was Meta, with over 1B users of their Messenger desktop app, whose update functionality did just that. The loss would be in the hundreds of millions.
> But agree that senior gatekeepers should know by heart some places where review needs to be extra careful. Like security pitfalls, exponential fallback of error handling, and yeah, probably this.
The lesson here is that much better use of automated tests (the app likely has no tests at all) and proper use of basic testing principles like TDD would prevent such junior-level embarrassing bugs from creeping into production paid software.
That is the difference between a $100 problem vs a $200M problem.
See the case of Knight Capital [0], which lost $460M due to a horrific deploy.
I worked on a product where there was basically no automated testing, just a huge product surface to click around with a bunch of options. Because of technical debt some of the options would trigger different code paths, but it was up to the developer to memorize all the code paths and test accordingly.
After I shipped a bug the Director of Engineering told me I should "test better" (by clicking around the app). This was about 1 step away from "just don't write bugs" IMO.
stevage 8 hours ago [-]
Yep, my first job was at a company like that. Huge Windows desktop app built in Delphi. No automated testing of any kind. No testing scripts either. Just a lot of clicking around.
cryptonym 6 hours ago [-]
My first job was exactly that, selling windows app in Delphi. I joined the new team working on .net windows apps and we had an army of people clicking on UI all day long.
They maintained their "test plan" on a custom software where they could report failures.
TBH, that was well done for what it was but really called for automation and lacked unit-testing.
HdS84 4 hours ago [-]
I am forced to use a custom KV store for my current project. That POS has a custom DSL, which can only be imported through a Swing UI, by clicking five buttons. Also, the UI is designed for 1024-pixel screens, so it is tiny on my 4K monitor.
01HNNWZ0MV43FF 5 hours ago [-]
I remember a test plan in a spreadsheet where no test had an ID.
I wish I could teach everything I learned the hard way at that job
Klaster_1 14 hours ago [-]
How do you adjust your testing approach to catch cases like this? In my experience, timing-related issues are hard to catch and can linger for years unnoticed.
doix 13 hours ago [-]
I would mock/hook/monkey patch/whatever the functions to get the current time/elapsed time, simulate a period of time (a day/week/month/year/whatever), make sure the function to download the file is called the correct amount of times. That would probably catch this bug.
Although, after such a fuck up, I would be tempted to make a pre-release check that tests the compiled binary, not any unit test or whatever. Use LD_PRELOAD to hook the system timing functions (a quick google shows that libfaketime [0] exists, but I've never used it), launch the real program and speed up time to make sure it doesn't try to download more than once.
[0] https://github.com/wolfcw/libfaketime
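For what it's worth, a minimal sketch of the unit-test idea above using Jest's fake timers; `startUpdateLoop` and its options are hypothetical stand-ins for whatever the real updater exposes:

    // updater.test.ts -- assumes Jest globals; the updater is built around setInterval
    declare function startUpdateLoop(opts: { intervalMs: number; downloadUpdate: () => void }): void;

    jest.useFakeTimers();

    test("downloads a discovered update at most once", () => {
      const downloadUpdate = jest.fn();
      startUpdateLoop({ intervalMs: 5 * 60_000, downloadUpdate });

      // Simulate a full day of wall-clock time (288 five-minute ticks).
      jest.advanceTimersByTime(24 * 60 * 60_000);

      // The regression in the article would make this 288, not 1.
      expect(downloadUpdate).toHaveBeenCalledTimes(1);
    });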
Similar to what doix said, consider reading the time as IO and then rewriting the code in sans-IO style so you can inject the time.
Then it's a unit test that looks too obvious to exist until you read the ticket mentioned in the comment above it
No need for monkey patching or hooking or preload
But before that you add a couple checkmarks to the manual pre-release test list: "1 hour soak test" and "check network transfer meters before and after, expect under 50 MB used in 1 hour (see bug #6969)"
In Linux they're under /sys/class/net I think
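A rough sketch of the injected-clock idea, with every name invented for illustration; the point is that the "download once per version" rule becomes a pure function you can test without real timers:

    type Clock = { now(): number };

    class UpdateScheduler {
      private downloadedVersion: string | null = null;
      constructor(private clock: Clock, private intervalMs: number) {}

      // Pure decision logic: no timers, no network -- inputs in, verdict out.
      shouldDownload(latest: string, lastCheckAt: number): boolean {
        if (this.downloadedVersion === latest) return false; // the guard the refactor lost
        return this.clock.now() - lastCheckAt >= this.intervalMs;
      }

      markDownloaded(version: string) {
        this.downloadedVersion = version;
      }
    }

    // In a test: let t = 0; const clock = { now: () => t };
    // bump t, call shouldDownload, and assert it stays false after markDownloaded.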
256_ 2 hours ago [-]
I don't think the author is wrong for saying that certain kinds of code should be written carefully. I object to the implication that other code shouldn't.
From TFA: "Write your auto-updater code very carefully. Actually, write any code that has the potential to generate costs carefully." So the focus is on code that "generate[s] costs". I think this is a common delusion programmers have; that some code is inherently unrelated to security (or cost), so they can get lazy with it. I see it like gun safety. You have to always treat a gun like it's loaded, not because it always is (although sometimes it may be loaded when you don't expect it), but because it teaches you to always be careful, so you don't absent-mindedly fall back into bad habits when you handle a loaded one.
Telling people to write code carefully sounds simplistic but I believe for some people it's genuinely the right advice.
jlarocco 5 hours ago [-]
They were using a typed language, so testing was unnecessary ;-)
stevage 8 hours ago [-]
>Just amazed that ‘better testing’ isn’t one of the takeaways in the summary.
I don't get the impression they did any testing at all.
chinchilla2020 1 hours ago [-]
"A single line of code caused <BUG>"
Yes, a single line of code is in the stack trace every time a bug happens. Why does every headline have to push this clickbait?
All errors occur at a single line in the program - and every single line is interconnected to the rest of the program, so it's an irrelevant statement.
999900000999 6 hours ago [-]
Sloppy coding all around. If you don't want to program something right, why don't you just direct users to the website to manually update it?
On one hand it's good that the author owns up to it, and they worked with their users to provide remedies. But so many things aren't adding up. Why does your screen recorder need to check for updates every 5 minutes? Once a day is more than enough.
This screams "We don't do QA, we just ship"
Cthulhu_ 6 hours ago [-]
Or, given it's a Mac app, just have the Mac app store take care of updates. That's part of the value that using the app store service gives you, the other one being not spending thousands in accidental data transfer when you do auto updates wrong.
rvz 5 hours ago [-]
> Or, given it's a Mac app, just have the Mac app store take care of updates. That's part of the value that using the app store service gives you,
And pay Apple their 30% cut on your revenue? No thanks.
> the other one being not spending thousands in accidental data transfer when you do auto updates wrong.
Or just actually write proper automated tests for basic features first, before a large refactor to prevent introducing issues like this from happening again?
While I respect the author's honesty in this mistake, the main takeaway here is not mentioned and that is just writing proper automated tests as their impression on this post is that there aren't any.
jbverschoor 4 hours ago [-]
2% of that already goes to Stripe or whatever you use, and after a year Apple's cut drops to 15%. It also gives you both a distribution and a marketing channel.
It was good enough for Netflix etc.
*I* don't want applications to be able to update themselves. Look at the malware that is Zoom, for example.
It's funny that people don't like telemetry, but at the same time they're ok with regular software update checks + installs.
ryandrake 6 hours ago [-]
Software doesn't need to check for updates at all. If I want to update my software, I'll update it. I don't need or want the software to be doing it on its own. All OS's have a native package manager at this point that can handle updates. We don't need applications going around it.
999900000999 6 hours ago [-]
A quick warning, "Hi User, you're out of date, please update.", is fair.
What's really scary here is the lack of consent. If I want to record videos I don't necessarily have an extra 250MB to spend (many users effectively pay by the gig) every time the developer feels like updating.
ValdikSS 7 hours ago [-]
I'm running an anti-censorship proxy service which uses Proxy Auto-Configuration (PAC) file which you can configure OS-wide or in the browser.
If the file contains invalid JS (syntax error, or too new features for IE on Win7/8), or if it's >1MB (Chromium-based browsers & Electron limit), and the file is configured system-wide, then EVERY APP which uses wininet starts flooding the server with the requests over and over almost in an endless loop (missing/short error caching).
Over the years, this resulted in DDoSing my own server and blackholing its IP on BGP level (happened 10+ times), and after switching to public IPFS gateways to serve the files, Pinata IPFS gateway has blocked entire country, on IPFS.io gateway the files were in top #2 requests for weeks (impacting operational budget of the gateway).
All of the above happens with tight per-IP per-minute request limits and other measures to conserve the bandwidth. It's used by 500 000+ users daily. My web server is a $20/mo VPS with unmetered traffic, and thanks to this, I was never in the situation as the OP :)
mgkimsal 6 hours ago [-]
> The app checks for the update every 5 minutes or when the user activates the app. Normally, when the app detected the update - it downloaded it and stopped the 5 minutes interval until the user installed it and restarted it.
This is still bad. I was really hoping the bug would have been something like "I put a 5 minute check in for devs to be able to wait and check and test a periodic update check, and forgot to revert it". That's what I expected, really.
sevg 14 hours ago [-]
Why in the world does it need to check for updates every 5 minutes?
The author seemed to enjoy calculating the massive bandwidth numbers, but didn’t stop to question whether 5 minutes was a totally ridiculous interval.
knowitnone 5 hours ago [-]
that's how frequent they find bugs in their app?
sota_pop 21 minutes ago [-]
Didn’t read the article, but why would an app not just check for updates on startup/shutdown?
danpalmer 15 hours ago [-]
> We decided to take responsibility and offer to cover all the costs related to this situation.
Good on them. Most companies would cap their responsibility at a refund of their own service's fees, which is understandable as you can't really predict costs incurred by those using your service, but this is going above and beyond and it's great to see.
weird-eye-issue 14 hours ago [-]
"Luckily, it was not needed"
jlarocco 5 hours ago [-]
More anecdata that commercial software is garbage, especially if it's targeting consumers.
I'll stick with open source. It may not be perfect, but at least I can improve it when it's doing something silly like checking for updates every 5 minutes.
indymike 8 hours ago [-]
We just put a version header in our app, and when we deploy new code the client checks against the version header and upgrades if the version is mismatched. No extra GET requests required. Bonus: we just use the last git commit hash as the version. Stupid simple.
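Presumably something along these lines, piggybacking on requests the client already makes so there are no dedicated update polls (the header name and the notify hook are guesses, not indymike's actual code):

    declare function notifyUpdateAvailable(version: string): void; // e.g. show a reload banner

    const CURRENT_VERSION = process.env.GIT_COMMIT ?? "dev";

    async function apiFetch(url: string): Promise<Response> {
      const res = await fetch(url);
      // The server stamps every response with the commit hash it was built from.
      const serverVersion = res.headers.get("x-app-version");
      if (serverVersion && serverVersion !== CURRENT_VERSION) {
        notifyUpdateAvailable(serverVersion);
      }
      return res;
    }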
01HNNWZ0MV43FF 5 hours ago [-]
I saw some project that used a DNS TXT field to check its version
That way I guess you get the caching of the DNS network for free, it uses basically one packet each way, encryption is still possible, and it can reduce the traffic greatly if a big org is running a thousand instances on the same network
I think it was written in Go. Might have been Syncthing
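If anyone wants to try the DNS idea, here is a minimal Node sketch; the record name and the `v=` convention are placeholders, not whatever Syncthing actually does:

    import { resolveTxt } from "node:dns/promises";

    async function latestVersionFromDns(): Promise<string | null> {
      // resolveTxt returns string[][]: each TXT record arrives as chunks.
      const records = await resolveTxt("version.example.com");
      const txt = records.flat().join("");
      const match = txt.match(/^v=(\S+)/); // e.g. "v=1.4.2"
      return match ? match[1] : null;
    }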
mimimi31 14 hours ago [-]
>Add special signals you can change on your server, which the app will understand, such as a forced update that will install without asking the user.
I understand the reasoning, but that makes it feel a bit too close to a C&C server for my liking. If the update server ever gets compromised, I imagine this could increase the damage done drastically.
Olshansky 5 hours ago [-]
Reminds me of some Twitter lore from 2012. I was just an intern....
This is back in the Rails days, before they switched to Scala.
I heard that there was a fail-whale no one could solve related to Twitter's identity service. IIRC, it was called "Gizmoduck."
The engineer who built it had left.
They brought him in for half a day of work to solve the P0.
*Supposedly*, he got paid ~50K for that day of work.
Simultaneously outrageous but also reasonable if you've seen the inside of big tech. The ROI is worth it.
That is all.
Disclaimer: don't know if it's true, but the story is cool.
ikiris 2 hours ago [-]
If a half day fix from a former employee costs that much, it’s likely because the company deserved it for some reason.
moi2388 11 hours ago [-]
Why on earth are you checking for updates every 5 minutes to begin with?!
Seriously this alone makes me question everything about this app.
dkdbejwi383 8 hours ago [-]
Probably a product owner wants to show off a nice chart at their next meeting showing how quickly users upgrade, as some kind of proxy metric for "engagement"
oldgregg 5 hours ago [-]
What's in that payload when they check for updates every 5 minutes?!
Novel dark pattern: You unchecked "Let us collect user data" but left "Automatically Update" checked... gotcha bitch!
stevage 8 hours ago [-]
I'm really surprised this could happen. As they note:
> Write your auto-updater code very carefully.
You have to be soooo careful with this stuff. Especially because your auto-updater code can brick your auto-updater.
It looks like they didn't do any testing of their auto update code at all, otherwise they would have caught it immediately.
The scale is astounding. I was briefly interested in the person who caused the error, then immediately realized it was irrelevant, because if a mechanism doesn't exist to catch an issue like that, then any company is living on borrowed time.
voidUpdate 13 hours ago [-]
whyyy does wikipedia not redirect mobile links to the desktop website when you have a desktop UA?
xigoi 12 hours ago [-]
Why do they have a separate mobile website at all instead of writing proper CSS to make one website work on all devices?
wodenokoto 12 hours ago [-]
Because people on desktops asking for the mobile site should be able to view the mobile site.
The url specifically asks Wikipedia to serve the mobile site.
voidUpdate 11 hours ago [-]
Well when I follow a desktop link on my phone, it redirects me to the mobile version, despite the URL specifically asking to serve the desktop site, it just doesn't work the other way around. Plus I never asked to see the mobile site, I followed a link someone else posted
BeFlatXIII 2 hours ago [-]
Why do people spam the mobile URL, leading me to degraded reading experiences?
Bug notwithstanding, checking for updates every 5 minutes is exactly the wrong way to do it.
You want to spread out update rollouts in case of a catastrophic problem. The absolute minimum should be once a day at a random time of day; preferably, roll out updates over multiple days.
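A sketch of what that scheduling could look like: each install picks a random daily slot once and sticks to it, so checks spread evenly and a bad release doesn't reach everyone in the same hour (all names illustrative):

    declare function checkForUpdates(): void;

    const DAY_MS = 24 * 60 * 60 * 1000;
    // Pick once on first run and persist it, so every install keeps its own slot.
    const jitterMs = Math.floor(Math.random() * DAY_MS);

    function msUntilNextCheck(nowMs: number): number {
      const todaySlot = Math.floor(nowMs / DAY_MS) * DAY_MS + jitterMs;
      const nextSlot = todaySlot > nowMs ? todaySlot : todaySlot + DAY_MS;
      return nextSlot - nowMs;
    }

    setTimeout(checkForUpdates, msUntilNextCheck(Date.now()));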
saretup 15 hours ago [-]
Not to mention the cost users paid to download 250 MB every 5 minutes.
ghurtado 14 hours ago [-]
It seems a bit self-centered to make their lost $8000 the focus of the article.
The title should have been: "how a single line of code cost our users probably more than $8000"
stevage 7 hours ago [-]
Totally. I live in a place where many (most?) ISP plans have limited monthly downloads. I'd be so pissed off if my monthly allowance was blown by this series of boneheaded decisions.
pests 15 hours ago [-]
It was mentioned, at the bottom. One customer even had their ISP cancel their service.
felineflock 5 hours ago [-]
From the article:
"While refactoring it, I forgot to add the code to stop the 5-minute interval after the new version file was available and downloaded.
It meant the app was downloading the same 250MB file, over and over again, every 5 minutes."
indrex 11 hours ago [-]
Plenty of (valid) criticism in the comments, but I appreciate the developer for publishing it.
stevage 7 hours ago [-]
I feel a bit iffy about turning the shitty experience you imposed on your users into content for your blog.
pandemic_region 6 hours ago [-]
This 1000 times. It takes courage to open up to mistakes. As a relatively young industry, we have a lot to learn still to move away from the instinctive blaming culture surrounding such failures. In this case, it's only a file being downloaded a couple of times, nobody died or got injured.
For those interested in this topic, and how other industries (e.g. Airline industry) deal with learning from or preventing failure: Sidney Dekker is the authority in this domain. Things like Restorative Just Culture, or Field guide to understanding human error could one day apply to our industry as well: https://sidneydekker.com/books.
leoapagano 6 hours ago [-]
Ignoring the obvious question of "why does a screen recorder that checks for updates every 5 minutes need to be installed if macOS already has a screen recorder built in"—writing your own (buggy) auto updater for a macOS app, in 2025, is nuts considering you also have two existing options for auto updates at your disposal, the Mac App Store and Sparkle (https://sparkle-project.org/), both of which are now nearly two decades old.
hollow-moe 3 hours ago [-]
Lesson learned: use OBS Studio
aranw 12 hours ago [-]
I have Screen Studio and I don't leave it open, but all I wish for now is that you could disable the auto-updater. Provide an option for it to be disabled and allow for manual update checking. Checking for an update every 5 minutes is total overkill, and downloading the update automatically is just bad. What if I was on mobile internet and had limited bandwidth and usage? The last thing I want is an app downloading its own update without my consent and knowledge.
philomath_mn 5 hours ago [-]
While this is unfortunate, I am sure I also have single lines in production with greater cost and equivalent value (close to none) -- and I've only worked at small companies. I am sure some of y'all can beat this by ~2 orders of magnitude.
Databricks is happy to have us as a customer.
hardwaresofton 15 hours ago [-]
Bugs are great chances to learn.
What might be fun is figuring out all the ways this bug could have been avoided.
Another way to avoid this problem would have been using a form of “content addressable storage”. For those who are new, this is just a fancy way of saying: store/distribute the hash (e.g. SHA-256) of what you’re distributing, and store it on disk in a way that content can be effectively deduplicated by name.
It’s probably not so easy as to make it a rule, but most of the time, an update download should probably do this
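A minimal sketch of that rule in Node terms; the manifest shape is invented, but the idea is just comparing hashes before spending bandwidth:

    import { createHash } from "node:crypto";
    import { readFile, writeFile } from "node:fs/promises";

    async function maybeDownloadUpdate(manifestUrl: string, cachePath: string) {
      // Hypothetical manifest: { url: string, sha256: string }
      const manifest = await (await fetch(manifestUrl)).json();

      const cached = await readFile(cachePath).catch(() => null);
      if (cached) {
        const localHash = createHash("sha256").update(cached).digest("hex");
        if (localHash === manifest.sha256) return; // already have this exact build
      }

      const bytes = Buffer.from(await (await fetch(manifest.url)).arrayBuffer());
      await writeFile(cachePath, bytes);
    }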
ghurtado 14 hours ago [-]
> What might be fun is figuring out all the ways this bug could have been avoided.
The most obvious one is setting up billing alerts.
Past a certain level of complexity, you're often better off focusing on mitigation than trying to avoid every instance of a certain kind of error.
HelloNurse 9 hours ago [-]
Note that billing alerts protect against unexpected network traffic, not directly against bugs and bad design in the software. Update checking remains a terrible idea.
surfmike 2 hours ago [-]
Contact a rep at Google, they can probably reverse a good portion of the $8000 as a one-time thing.
vachina 6 hours ago [-]
Does the developer release a tag for every ctrl+s?
huksley 11 hours ago [-]
They had no cost usage alerts, so they did not even know the thing was happening; they just realized with the first bill.
I think that is the essence of what is wrong with cloud costs: everything defaults to the possibility of rapid scaling for everyone, while in reality 99% have quite predictable costs month over month.
pornel 14 hours ago [-]
It would also be nice if the update archive wasn't 250MB. Sparkle framework supports delta updates, which can cut down the traffic considerably.
mathverse 14 hours ago [-]
This is an electron app.
pornel 3 hours ago [-]
Which is even better for incremental updates.
If just some JavaScript files change, you don't need to redownload the entire Chromium blob.
dahcryn 11 hours ago [-]
which is their design choice, not an obligation.
Electron really messed up a few things in this world
aserafini 6 hours ago [-]
I had one of these: an emoji was inserted into a notification SMS, which doubled SMS costs due to encoding.
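For anyone who hasn't hit this: one non-GSM-7 character (like an emoji) forces the whole SMS into UCS-2, cutting per-segment capacity from 160 to 70 characters (153 vs 67 once a message is split into concatenated parts), so the same text becomes more billable segments. Rough math:

    // Rough segment math; real gateways detect the encoding from the content.
    function smsSegments(text: string, isUcs2: boolean): number {
      const single = isUcs2 ? 70 : 160; // capacity of a standalone message
      const multi = isUcs2 ? 67 : 153;  // capacity per part once concatenated
      return text.length <= single ? 1 : Math.ceil(text.length / multi);
    }

    smsSegments("x".repeat(100), false);       // 1 segment (GSM-7)
    smsSegments("x".repeat(100) + "🎉", true); // 102 UTF-16 units -> 2 segments: cost doubled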
Hobadee 7 hours ago [-]
Why is nobody talking about what a shady business practice it is that cloud providers don't alert you to this kind of overage by default? Sure, you can set up alerts, but when you go 10x over your baseline in a short period of time, that should trigger an alert regardless of your configured alerts.
bee_rider 6 hours ago [-]
They could compare against the baseline, I guess.
In the grand scheme of things, $8k is not much money for a business, right? Like we can be pretty sure nobody at Google said “a-ha, if we don’t notify the users, we will be able to sneak $8k out of their wallets at a time.” I think it is more likely that they don’t really care that much about this market, other than generally creating an environment where their products are well known.
firesteelrain 7 hours ago [-]
Or treat it like the stock market and shut it down.
kovac 6 hours ago [-]
CI/CD at its finest :p I guess the 5-minutely update checks are correlated with the rate of bug fixes they need to push... Surely, that can't be for new features.
Looking at the summary section, I'm not convinced these guys learned the right lesson yet.
rvz 6 hours ago [-]
CI/CD is part of the solution, but it is really just proper testing.
Nothing has been learned in this post, and it cost him $8,000 because of inadequate testing.
bryanrasmussen 14 hours ago [-]
I read that as "A single line of code costs $8000", and from the comments it seems a few others had the same thought. Reading the article, it is not an ongoing cost; the original title is "One line of code that did cost $8,000", so as some others have pointed out, it is a bug that cost $8000.
stevage 7 hours ago [-]
I was expecting it to be about a good line of code that cost $8,000 in development time to write, which might be an interesting story.
Always42 4 hours ago [-]
$8000 is nothing for most companies. If you have 10 developers making $100k a year, your burn rate is about $4000 a working day just for salaries.
dimatura 2 hours ago [-]
$8000 also seems pretty cheap for 2PB of traffic? Looking at google cloud storage egress rates, $0.02/GiB (which is on the lower end, since it depends on destination) would be about $40k for 2PB.
enceladus76 5 hours ago [-]
For me this shows once again that proper testing is neglected by many developers and companies. Sadly, it is not even mentioned in the advice at the end of the article.
rvz 5 hours ago [-]
Exactly. No mention of writing automated tests or even TDD at all.
It's best to save everyone by writing tests that prevent a $100 issue on your machine from becoming a costly $10M+ problem in production as the product scales after it has launched.
This won't be the last time, and this is what 'vibe coding' doesn't consider; it will introduce more issues like this.
timhigins 5 hours ago [-]
Screen studio does make the best-looking demo videos I've seen. Any favorite alternatives? Points for free or open source.
> Add special signals you can change on your server, which the app will understand, such as a forced update that will install without asking the user.
Ummm no. Even after this they haven't learned. Check for updates on app load and prompt the user for the download/update.
jumploops 14 hours ago [-]
Oh boy, I know of at least one case where a single line of code cost ~$500k…
Curious where the high-water mark is across all HNers (:
explodes 13 hours ago [-]
Others have reported higher already, but for data:
Our team had a bug that cost us about $120k over a week.
Another bug running on a large system had an unmeasurable cost. (Could be $K, could be $M)
agos 12 hours ago [-]
I would be surprised if half of the users on this site did _not_ create or personally see a bug where a line cost way more than $8000
short_sells_poo 14 hours ago [-]
$1.2mln, gone in about 30 minutes.
coffeeenjoyer 14 hours ago [-]
I assume most of that 2PB network traffic was not egress, right? Otherwise how did it "only" cost you $8k on Google Cloud?
Even at a cost of $0.02 per GB, which is usually a few times lower than the actual prices I could find there, that would still result in an invoice of about $40k...
CodesInChaos 5 hours ago [-]
The first 500TB should have cost $35k already. At that point pricing goes from $0.06/GB to "contact us". So I'd have expected google to charge $80k or so for the whole thing. (Unless google decided to forgive most of the cost)
epolanski 5 hours ago [-]
I'm actually surprised by how cheaply they got away with it.
a_t48 5 hours ago [-]
I’ve done worse. At my very first job I wrote some slow login-rewards calculation code for a mobile game that caused a black screen on startup, for long enough that users thought the app was broken and closed it out. (I was simulating passing time one minute at a time in Lua or some BS. Oops!) It cost the company some large fraction of my salary at the time. My boss very kindly said that it was okay, everyone ends up mucking up like that at some point in their career, and no, I wasn’t fired, because the company had just spent a large sum teaching me a lesson. We sat down at a whiteboard and I quickly came up with a solution that could just calculate the rewards one should get between two dates - there was some complexity that made this harder than it sounds on paper, but simulating time manually was not the answer.
Fokamul 7 hours ago [-]
Truth be told, the Google Cloud console is a horrible mess for new people who just want to quickly set up an API, pay, and not have to care about it anymore.
Well, you should hire a contractor to set up the console for you.
"Designed for macOS", aah don't worry, you will have the money from apes back in no time. :)
misiek08 7 hours ago [-]
So that is 2PB of write lifetime burned on users’ disks? Interesting to count that.
mmmlinux 5 hours ago [-]
Or you could use the built in screen recorder...
arm32 4 hours ago [-]
This is SUCH an Electron moment!
charlie0 5 hours ago [-]
I wonder if that line of code was vibed.
insin 14 hours ago [-]
Knowing where to put the line: $7999 (is sadly not the story)
kimbernator 7 hours ago [-]
This cost -you- $8000. It probably cost users a lot more.
poleguy 15 hours ago [-]
Ever consider not using cloud for everything? Hosting this on traditional hosting would have limited the problem and the cost.
M95D 14 hours ago [-]
And in that case, the problem would not be discovered until 1) someone opened a bug report, which rarely happens, because any competent user would just disable auto-updates, and 2) that bug report would be investigated, which also rarely happens.
cess11 12 hours ago [-]
It's not like you are forbidden to monitor your services just because you didn't put them in big clown.
codeulike 5 hours ago [-]
This 'single line of code' headline trend is dumb. Of course a single line of code can fuck everything up, code is complicated and thats how it works. Its not knitting.
zelon88 7 hours ago [-]
> As a designer, I value the experience product I create provides to the users. And this was not even a bad experience; it was actually harmful.
$229 per year on a closed source product and this is the level of quality you can expect.
You can have all the respect for users in the world, but if you write downright hazardous code then you're only doing them a disservice. What happened to all the metered internet plans you blasted for 3 months? Are you going to make those users whole?
Learning from and owning your mistake is great and all, but you shouldn't be proud or gloating about this in any way, shape, or form. It is a very awkward and disrespectful flex on your customers.
Fokamul 7 hours ago [-]
Did you see? It's "Designed for macOS"
I would put a premium edition at $3,999 at least.
We've had single characters cost, you know, millions of $. (If you're familiar with C++ and the auto keyword, it's relatively obvious why that character is "&".)
byyll 13 hours ago [-]
I'll let my employer know to update my salary or reduce my workload.
navigate8310 14 hours ago [-]
So did you pay, or did Google show you mercy by eating their potential earnings?
cyprx 15 hours ago [-]
meanwhile the CTOs plan to apply AI to their production codebases :)
watusername 7 hours ago [-]
Needs (2023) in the title.
bilekas 14 hours ago [-]
> While refactoring it, I forgot to add the code to stop the 5-minute interval after the new version file was available and downloaded.
I’m sorry, but it’s exactly cases like these that should be covered by some kind of test, especially when diving into a refactor. Admittedly, it’s nice to hear people share their mistakes and horror stories; I would get some stick for this at work.
meta87 3 hours ago [-]
rookie numbers
jer0me 15 hours ago [-]
(2023)
silverfrost 13 hours ago [-]
In other news a screen recorder app is a 250MB (presumably compressed) download...
zveyaeyv3sfye 11 hours ago [-]
FWIW, OBS is ~150 MB, not an electron app and actually open source.
These articles are great, but I have to one-up the blog: I recently helped a small dev team clean up a one-line mistake that cost them $95,000... which they didn't notice for three months.
The relevance is that instead of checking for a change every 5 minutes, the delay wasn't working at all, so the check ran as fast as possible in a tight loop. This was between a server and a blob storage account, so there was no network bottleneck to slow things down either.
It turns out that if you read a few megabytes 1,000 times per second all day, every day, those fractions of a cent per request are going to add up!
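The shape of that bug is depressingly common in async code; presumably something like this (names invented):

    declare function checkBlobForChanges(): Promise<void>; // one metered read per iteration

    const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

    async function pollForChanges() {
      while (true) {
        await checkBlobForChanges();
        sleep(5 * 60_000); // BUG: missing `await` -- the loop spins flat out
        // await sleep(5 * 60_000);  <-- the one-line, $95k fix
      }
    }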
nikanj 15 hours ago [-]
With public sector procurement, $8000 is a pretty standard price for a line of code.
ant6n 13 hours ago [-]
Do you mean "a" line of code, or "each" line of code?
nikanj 11 hours ago [-]
A dead-simple 1000-line app? $8 million from Accenture, IBM or similar
gitroom 4 hours ago [-]
Honestly, that's a rough one. I've written dumb bugs like that before, cost nowhere near $8k tho lmao. Kinda makes me extra paranoid about what my code's actually doing in the background.
ForOldHack 3 hours ago [-]
Just $8000?
A giant ship’s engine failed. The ship’s owners tried one ‘professional’ after another but none of them could figure out how to fix the broken engine.
Then they brought in a man who had been fixing ships since he was young.
He carried a large bag of tools with him and when he arrived immediately went to work. He inspected the engine very carefully, top to bottom.
Two of the ship’s owners were there watching this man, hoping he would know what to do. After looking things over, the old man reached into his bag and pulled out a small hammer. He gently tapped something. Instantly, the engine lurched into life. He carefully put his hammer away and the engine was fixed!!!
A week later, the owners received an invoice from the old man for $10,000.
What?! the owners exclaimed. “He hardly did anything..!!!”.
So they wrote to the man: “Please send us an itemised invoice.”
The man sent back an itemised invoice: Tapping with a hammer: $2. Knowing where to tap: $9,998.
I've read on HN that a lot of people have 10Gb Ethernet at home. /s
For me that would also be wrong, if I cannot disable it in the configuration. I do not want to extend startup time.
It's a whole new world out there.
Any effort to use their brain shall be drastically punished. /s
So yes, it’s insane, but easy to see where the size comes from.
Also, web apps are just great nowadays; most OSes support installing PWAs fairly decently, no?
ffs
For example, on Linux, it uses WebKitGTK as the browser engine, which doesn't render the same way Chrome does (which is the web view used on Windows), so multi-platform support is not totally seamless.
Using something like Servo as a lightweight, platform-independent web view seems like the way forward, but it's not ready yet.
Found this a few months ago: https://gifcap.dev/
Screen recording straight from a regular browser window, though it creates GIFs instead of video files. Links to a git repo so you can set it up locally.
I imagine if you stick to desktop the situation is less awful but still
It's about time Linux desktops adopt some form of ${XDG_WEB_ENGINES:-/opt/web_engines} convention to have web-based programs to fetch their engines as needed and play nice with each other.
I would say no, and some are actively moving away from PWA support even if they had it before.
Plus, electron et al let you hook into native system APIs whereas a PWA cannot, AFAIK.
Yes, even in metropolitan areas in developed countries in 2025.
1.5 megabits/s is still common, but Starlink is taking over.
Apparently such service is still somehow available; I found https://www.dialup4less.com with a web search. Sounds more like a novelty at this point. But "real" internet service still just doesn't work as well as it's supposed to in some places.
In point of fact, I can fairly reliably download at that rate (for example I can usually watch streaming 1080p video with only occasional interruptions). The best case has been over 20Mbit/s. (This might also be partly due to my wifi; even with a "high gain" dongle I suspect the building construction, physical location of computer vs router etc. causes issues.)
Even if it is made to CIA/GRU/chinese state security ? /s
They are building features right now. There are a lot of bugs which Microsoft will never fix, or fixes only after years. (Double-click registered on single mouse clicks, clicking "x" to close a window also closes the window underneath, GUI elements rendered as black due to the monitor not being recognized, etc.)
Those packets consume bandwidth and device utilization too, but that is a flat fee, whereas log traffic is measured per GB, so we investigated where the unexpected growth came from.
Plenty of things (like playstation's telemetry endpoint, for one of many examples) just continually phones home if it can't connect.
The few hours a month of playstation uptime shows 20K dns lookups for the telemetry domain alone.
The server can return an override backoff so the server can tell the client how often or how quickly to retry.
It’s nice to have in case some bug causes increased load somewhere, you can flip a value on the server and relieve pressure from the system.
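A sketch of that pattern using the standard Retry-After HTTP header; whether the poster's system uses this exact header is an assumption:

    const DEFAULT_INTERVAL_MS = 24 * 60 * 60 * 1000;

    async function checkAndScheduleNext(url: string) {
      const res = await fetch(url);
      // The server can hint the next check in seconds; fall back to once a day.
      const hinted = Number(res.headers.get("retry-after"));
      const nextMs = Number.isFinite(hinted) && hinted > 0 ? hinted * 1000 : DEFAULT_INTERVAL_MS;
      setTimeout(() => checkAndScheduleNext(url), nextMs);
    }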
Turns out Adobe's update service on Windows reads (and I guess also writes) about 130MB of data from disk every few seconds. My disk was 90%+ full, so the usual slowdown related to this was occurring, slowing disk I/O to around 80MB/s.
Disabled the service and the issues disappeared. I bought a new laptop since, but the whole thing struck me as such an unnecessary thing to do.
I mean, why was that service reading/writing so much?
So yes it should only be once a day (and staggered), but on the other hand it's a pretty low-priority issue in the grand scheme of things.
Much more importantly, it should ask before downloading rather than auto-download. Automatic downloads are the bane of video calls...
The tone might be somewhat charged, but this seems like a fair criticism. I can’t imagine many pieces of software that would need to check for updates quite that often. Once a day seems more than enough, outside of the possibility of some critical, all consuming RCE. Or maybe once an hour, if you want to be on the safe side.
I think a lot of people are upset with software that they run on their machines doing things that aren’t sensible.
For example, if I wrote a program that allows you to pick files to process (maybe some front end for ffmpeg or something like that) and decided to keep an index of your entire file system and rebuild it frequently just to add faster search functionality, many people would find that to be wasteful both in regards to CPU, RAM and I/O, alongside privacy/security, although others might not care or even know why their system is suddenly slow.
Why not just follow every Mac app under the sun and prompt if there's an update when the app is launched and download only if the user accepts?
Once a day would surely be sufficient.
Data centers are big and scary; nobody wanted to run their own. The hypothetical cost savings of firing half the IT department were too good to pass up.
AWS even offered some credits to get started, first hit's free.
Next thing you know your AWS spend is out of control. It just keeps growing and growing and growing. Instead of writing better software, which might slow down development, just spend more money.
Ultimately in most cases it's cheaper in the short term to give AWS more money.
Part of me wants to do a $5 VPS challenge. How many users can you serve with $5 per month? Maybe you actually need to understand what your server is doing?
I'm talking nonsense, I know.
Except this is unironically a great value proposition.
Correction--many have years of inexperience. Plenty of people that do things like this have "7 years designing cloud-native APIs".
I lost it
Don't forget the Java + Kafka consultants telling you to deploy your complicated "micro-service" to AWS, and you ending up spending tens of millions on their "enterprise optimized compliant best practice™" solution, so that you end up needing to raise money every 6 months instead of saving costs as you scale up.
Instead, you spin up more VMs and pods to "solve" the scaling issue, losing even more money.
It is a perpetual scam.
Weekly or monthly would be sufficient. I'd also like "able to be disabled manually, permanently" as an option, too.
There are only a few applications with exposed attack surface (i.e. accept incoming requests from the network) and a large enough install base to cause "massive damage all of the Internet". A desktop screen recorder app has no business being constructed in a manner that's "wormable", nor an install base that would result in significant replication.
The software that we need the "average user" to update is stuff like operating systems. OS "manufacturers" have that mostly covered for desktop OS's now.
Microsoft, even though their Customers were hit with the "SQL Slammer" worm, doesn't force automatic updates for the SQL Server. Likewise, they restrict forcing updates only to mainstream desktop OS SKUs. Their server, embedded, and "Enterprise" OS SKUs can be configured to never update.
Well they might need to rush out a fix to a bug that could be harmful for the user if they don't get it faster.
For example, a bug that causes them to download 250MB every 5 minutes.
Good way of showing adoption and growth.
Nobody under any circumstances needs usage stats with 5 minute resolution. And certainly not a screen recorder.
Websites get this data pretty much by default and they don't need consent for it.
No, it doesn't mean that.
The auto-updater introduced a series of bad outcomes.
- Downloading the update without consent, causing traffic for the client.
- Not only that, the download keeps repeating itself every 5 minutes? You did at least detect whether the user is on a metered connection, right... ?
- A bug where the update popup interrupts flow
- A popup is a bad thing in itself that you inflict on your users. I think it is OK when closing the app, and let the rest be done in the background.
- Some people actually pay attention to the outgoing connections apps make, and even a simple update check every 5 minutes is excessive. Why even do it while the app is running? Do it on startup and ask on close. Again, some complexity: assume you're not on a network, do it in the background, and don't bother retrying much.
- Additional complexity for the app that caused all of the above. And it came with a price tag for the developer.
Wouldn't the app store be the perfect way to handle updates in this case, to offload the complexity there?
Thinking of it, the discussed do-it-yourself update checking is so stupid that malice and/or other serious bugs should be assumed.
Going back to the blog post and re-reading it with this possibility in mind is quite a trip.
> It turns out thousands of our users had the app running in the background, even though they were not using it or checking it for weeks (!). It meant thousands of users had auto-updater constantly running and downloading the new version file (250MB) over and over again every 5 minutes
This could easily have been data exfiltration from client computers instead, and few (besides the guy whose internet contract got cancelled for heavy traffic) would have noticed.
Screen Studio has 32k followers. Let's say 6% are end users: 2000 users at $229 is about $458k in revenue, and a 30% cut of that is roughly $137k in App Store fees.
I am going to say writing your own app update script is a wash time wise, as getting your app published is not trivial, especially for an app that requires as many permissions as screen studio.
If you’re a small shop or solo dev, it is real hard to justify going native on three platforms when electron gives it for (near) free. And outside of HN, no one seems to blink at a 250MB bundle.
There are alternatives like Tauri that use the system browser and allow substantially smaller bundles, but they’re not nearly as mature as Electron, and you will get cross platform UI bugs (some of which vary by user’s OS version!) from the lack of standardization.
Please, many people connect to the internet via a mobile phone hotspot, at least occasionally.
This bug would likely cause you to go through your entire monthly data in a few hours or less.
You should probably not roll your own auto-updater.
If you do, checking every 5 minutes for updates is waaaay too often (and likely hurts battery life by triggering the radio).
And triggering a download without a user-prompt also feels hostile to me.
The app size compounds the problem here, but the core issue is bad choices around auto-updating
I’d actually seen this project before because the author did a nice write up on using React portal to portal into electron windows[1], which is something I decided to do in my app.
I’d just assumed his was a cross platform project.
1: https://pietrasiak.com/creating-multi-window-electron-apps-u...
I can remember when I would have to leave a 250MB download running overnight.
Before that, I can remember when it would have filled my primary hard drive more than six times over.
... Why can't the app-specific code just get plugged into a common, reusable Electron client?
Tauri is an alternative framework that uses whatever web view the OS provides, saving ~200mb bundle size. On Mac that’s a (likely outdated) version of Safari. On Windows it’ll be Edge. Not sure what Linux uses, I’d guess it varies by distro.
The promise of Electron (and it’s an amazing value prop) is that your HTML/JS UI will always look and work the same as in your dev environment, no matter what OS the host is running.
I don’t have the time or inclination to test my app on the most recent 3 releases of the most popular operating systems every time I change something in the view layer. With Electron, I trade bundle size for not having to do so.
I do think alternatives like Tauri are compelling for simple apps with limited UI, or where a few UI glitches are acceptable (e.g. an internal app). Or for teams that can support the QA burden.
Open QuickTime Player and hit Control-Command-N for a new screen recording (or just press Shift-Command-5). Press record.
That was a thing I thought was missing from this writeup. Ideally you only roll out the update to a small percentage of users. You then check whether anything broke (no idea how long to wait, 1 day?). Then you increase the percentage a little more (say, 1% to 5%), wait a day again, and check. Finally you update everyone (who has updates on)
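Staged rollout is usually just deterministic bucketing; a sketch, with an illustrative hashing scheme:

    import { createHash } from "node:crypto";

    // Bucket each install into 0-99, stable per (install, version) pair.
    function rolloutBucket(installId: string, version: string): number {
      const digest = createHash("sha256").update(`${installId}:${version}`).digest();
      return digest.readUInt16BE(0) % 100;
    }

    // Server-side knob: day 1 -> 1, day 2 -> 5, then 100. No client change needed.
    function isEligible(installId: string, version: string, rolloutPercent: number): boolean {
      return rolloutBucket(installId, version) < rolloutPercent;
    }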
But then the HN crowd would complain "why use an app store? that's gate keeping, apple could remove your app any day, just give me a download link, and so on..."
You literally can't win.
Their users do not care about their screen recording studio anywhere near as much as the devs who wrote it do.
Once a month is probably plenty.
Personally, I disable auto-update on everything wherever possible, because the likelihood of annoying changes is much greater than welcome changes for almost all software I use, in my experience.
Screen Studio can collect basic usage data to help us improve the app, but you can opt out of it during the first launch. You can also opt out at any time in the app settings.
1) Emergency update for remote exploit fixes only
2) Regular updates
The emergency update can show a popup, but only once. It should explain the security risk. But allow user to decline, as you should never interrupt work in progress. After decline leave an always visible small warning banner in the app until approved.
The regular update should never pop up, only show a very mild update reminder that is NOT always visible, instead behind a menu that is frequently used. Do not show notification badges; they frustrate people with an inbox-zero type of condition.
This is the most user friendly way of suggesting manual updates.
You have to understand: if a user has 30 pieces of software, they have to update something every day of the month. That is not a good overall user experience.
That's not a user issue though; it's a "packaging and distribution of updates" issue, which coincidentally has been solved for other OSes using a package manager.
If the update interval had been 1 day+, they probably wouldn't have noticed within a month the way they did with a 5-minute update check interval.
It is sort of fun (for $8,000) as it was “just” a screenshotter, but imagine this with a bank app or any other heavily installed app.
All cloud providers should have alerts for excessive use of network by default. And they should ask developers if they really want to turn alerts off.
I remember a Mapbox app that cost much more, just because the provider charged by the month… and it was a great dispute over whose fault it was…
This could have easily been avoided by prompting the user for an update, not silently downloading it in the background... over and over.
How the times have changed ..
At some scale such careless mistakes are going to create real effects for all users of internet through congestion as well.
If this was not a $8000 mistake but was somehow covered by a free tier or other plan from Google Cloud, would they still have considered it a serious bug and fixed it as promptly?
How many such poor designs are out there generating traffic and draining common resources.
We used Sparkle, https://sparkle-project.org/, to do our updates. IMO, it was a poor choice to "roll their own" updater.
Our application was very complicated and shipped with Mono... And it was only about ~10MB. The Windows version of our application was ~2MB and included both 32-bit and 64-bit binaries. WTF are they doing shipping a 250MB screen recorder?
So, IMO, they didn't learn their lesson. The whole article makes them look foolish.
250 MB is just the download DMG, the app itself is almost 550 MB. It’s an Electron app.
Who would be foolish enough to download that?
The number of times I have caught junior or even experienced devs writing potential PII leaks is absolutely wild. It's just crazy easy in most systems to open yourself up to potential legal issues.
The context it makes the most sense is accepting code from strangers in a low trust environment.
The alternative to trying to prevent mistakes is making it easy to find and correct them. Run CI on code after it’s been merged and send out emails if it’s failed. At the end of a day produce a summary of changes and review them asynchronously. Use QA, test environments, etc.
This feels like a strange sense of priorities which would be satirised in a New Yorker/Far Side single-panel comic: “Sure, my mistake brought down the business and killed a dozen people, but I’m not sure you appreciate how fast I did it”.
Code should be correct and efficient. Monkeys banging their heads against a keyboard may produce code fast, but it will be brittle and you’ll have to pay the cost for it later. Of course, too many people view “later” as “when I’m no longer here and it’s no longer my problem”, which is why most of the world’s software feels like it’s held together with spit.
Thanks for taking my experience and comment seriously and challenging your preconceptions.
> Code should be correct and efficient.
When it ships to customers. The goal is to find the bugs before then. Having a stable branch can be accomplished in many ways besides gating each merge with a review.
Do you have any studies to show how effective synchronous code review is in preventing mistakes? If they are such a good idea why not do 2 or 3?
I apologise if my comment read as mean. I wanted to make the joke and it may have overshadowed the point.
> Do you have any studies to show how effective synchronous code review is in preventing mistakes?
I could’ve been clearer. I’m not advocating for code reviews, I’m advocating for not placing “velocity” so high on the list of priorities.
> If they are such a good idea why not do 2 or 3?
This argument doesn‘t really make sense, though. You’ve probably heard the expression “measure twice, cut once”—you don’t keep measuring over and over, you do it just enough to ensure it’s right.
Well my comment is against synchronous code reviews. So we are not in disagreement.
> you do it just enough to ensure it’s right.
I agree. Each layer of review etc is a cost and has benefits. You want to pick an appropriate level.
Yes, they kill your velocity. However, the velocity of a team can be massively increased by shipping small things a lot more often.
Stable branches that sit around for weeks are the real velocity killer, and make things a lot more risky on deployment.
The purpose of such a review is a deliberate bottleneck in the earlier stage of development to stop it becoming a much larger bottleneck further down the line. Blocking one PR is a lot cheaper than blocking an entire release, and having a human in the loop there can ensure the change is in alignment in terms of architecture and engineering practices.
CI/CD isn’t the only way to do it but shifting left is generally beneficial even with the most archaic processes.
You’re taking a more extreme position than the one I’m stating. You can review every day or every hour if you want.
> a deliberate bottleneck in the earlier stage
Wouldn’t it be better if we could catch bugs AND avoid the bottleneck? That’s the vision. Well-intentioned people may disagree about how to accomplish that.
This is the same point three times, and I don't agree with it. This is like saying tests kill velocity, there's nothing high velocity about introducing bugs to your code base.
Everything introduces context switching, there's nothing special about code reviews that makes it worse than answering emails, but I'm not going to ignore an important email because of "context switching."
Everyone makes mistakes, code reviews are a way to catch those. They can also spread out the knowledge of the code base to multiple people. This is really important at small companies.
CI is great, but I have yet to see a good CI tool that catches the things I do.
No it isn’t. Fake work, synchronization, and context switching are all separate problems.
> code reviews are a way to catch those
I said you can do reviews - but there is no reason to stop work to do them.
Why not require two or three reviews if they are so helpful at finding mistakes?
I agree everyone makes mistakes - that’s why I would design a process around fixing mistakes, not screening for perfection.
How many times have you gone back to address review comments and introduced a regression because you no longer have the context in your head?
For secure software, e.g. ASIL-D, you will absolutely have a minimum 2 reviewers. And that’s just for the development branch. Merging to a release branch requires additional sign offs from the release manager, safety manager, and QA.
By design the process slows down “velocity”, but it definitely increases code quality and reduces bugs.
Places do? A lot of open-source projects have the concept of dual reviews, and a lot of code bases have CODEOWNERS to ensure the people with the context review the code, so you could have 5-10 reviewers if you do a large PR
Diminishing returns, of course. I have worked places where two reviews were required and it was not especially more burdensome than one, though.
I catch so many major errors in code review ~every day that it's bizarre to me that someone is advocating for zero code review.
Context switching is a problem because it...kills velocity. Fake work is a problem because it kills velocity. You're saying it's time that could be better spent elsewhere, but trying to make it sound wider. I disagree with the premise.
Synchronization is a new word, unrelated to what you originally wrote.
> How many times have you gone back to address review comments and introduced a regression because you no longer have the context in your head?
Never? I am not unable to code in a branch after a few days away from it. If I were, I would want reviews for sure! Maybe you have had reviews where people are suggesting large, unnecessary structural changes, which I agree would be a waste of time. We're just looking for bug fixes and acceptably readable code. I wouldn't want reviewers opining on a new architecture they read about that morning.
I believe you can figure it out.
> Never?
Ok well I’m trying to talk to people who have that problem. Because I and my team do.
The website makes it seem like it's a one person shop.
If you're not confident you can review a piece of code you wrote and spot a potentially disastrous bug like the one in OP, write more tests.
It's just tricky, basically one fat edge case, and a critical part of your recovery plan in case of serious bugs in your app.
(This bug isn't the only problem with their home-grown updater. Checking every 5 min is just insane. Kinda tells me they aren't thinking much about it.)
Especially for a Mac-only application where Sparkle (https://sparkle-project.org/) has been around for almost two decades now and has been widely used across all sorts of projects to the point that it's a de facto standard. I'd be willing to bet that almost every single Mac "power user" on the planet has at least one application using Sparkle installed and most have a few.
You can use whatever you want outside of the App Store - most will use Sparkle to handle updates https://sparkle-project.org/. I presume Windows is similar.
The fact that that is what the system package manager is is why I said Apple "knows best". You can pick from dozens of system packages managers hooked up to hundreds, if not thousands of different repos on Linux.
Just amazed. Yea ‘write code carefully’ as if suggesting that’ll fix it is a rookie mistake.
So so frustrating when developers treat user machines like their test bed!
Avoidable, unfortunate, but the cost of slowing down development progress e.g. 10% is much higher.
But agree that senior gatekeepers should know by heart some places where review needs to be extra careful. Like security pitfalls, exponential fallback of error handling, and yeah, probably this.
I doubt there’s a CEO. Despite the use of “we”, pretty sure this is one guy building the app. All the copyright notices and social media go back to one person.
> But agree that senior gatekeepers should know by heart some places where review needs to be extra careful. Like security pitfalls, exponential fallback of error handling, and yeah, probably this.
The lesson here is much better use of automated tests (The app likely has no tests at all) and proper use of basic testing principles like TDD would prevent such junior-level embarrassing bugs creeping up in production paid software.
That is the difference between a $100 problem vs a $200M problem.
See the case of Knight Capital [0] who lost $460M, due to a horrific deploy.
[0] https://www.henricodolfing.com/2019/06/project-failure-case-...
After I shipped a bug the Director of Engineering told me I should "test better" (by clicking around the app). This was about 1 step away from "just don't write bugs" IMO.
TBH, that was well done for what it was but really called for automation and lacked unit-testing.
I wish I could teach everything I learned the hard way at that job
Although, after such a fuck up, I would be tempted to make a pre-release check that tests the compiled binary, not any unit test or whatever. Use LD_PRELOAD to hook the system timing functions(a quick google shows that libfaketime[0] exists, but I've never used it), launch the real program and speed up time to make sure it doesn't try to download more than once.
[0] https://github.com/wolfcw/libfaketime
Then it's a unit test that looks too obvious to exist until you read the ticket mentioned in the comment above it
No need for monkey patching or hooking or preload
But before that you add a couple checkmarks to the manual pre-release test list: "1 hour soak test" and "check network transfer meters before and after, expect under 50 MB used in 1 hour (see bug #6969)"
In Linux they're under /sys/class/net I think
From TFA: "Write your auto-updater code very carefully. Actually, write any code that has the potential to generate costs carefully." So the focus is on code that "generate[s] costs". I think this is a common delusion programmers have; that some code is inherently unrelated to security (or cost), so they can get lazy with it. I see it like gun safety. You have to always treat a gun like it's loaded, not because it always is (although sometimes it may be loaded when you don't expect it), but because it teaches you to always be careful, so you don't absent-mindedly fall back into bad habits when you handle a loaded one.
Telling people to write code carefully sounds simplistic but I believe for some people it's genuinely the right advice.
I don't get the impression they did any testing at all.
Yes, a single line of code is in the stack trace every time a bug happens. Why does every headline have to push this clickbait?
All errors occur at a single line in the program - and every single line is interconnected to the rest of the program, so it's an irrelevant statement.
On one hand it's good that the author owns up to it, and they worked with their users to provide remedies. But so many things aren't adding up. Why does your screen recorder need to check for updates every 5 minutes? Once a day is more than enough.
This screams "we don't do QA, we just ship".
And pay Apple their 30% cut on your revenue? No thanks.
> the other one being not spending thousands in accidental data transfer when you do auto updates wrong.
Or just actually write proper automated tests for basic features first, before a large refactor, to keep issues like this from being introduced again?
While I respect the author's honesty about this mistake, the main takeaway is never mentioned: write proper automated tests. The impression this post gives is that there aren't any.
It was good enough for Netflix etc.
*I* don't want applications to be able to update themselves. Look at Zoom, for example; it has behaved like malware.
It's funny that people don't like telemetry, but at the same time they're ok with regular software update checks + installs.
What's really scary here is the lack of consent. If I want to record videos, I don't necessarily have an extra 250 MB to spend (many users effectively pay by the gig) every time the developer feels like updating.
If the file contains invalid JS (a syntax error, or features too new for IE on Win7/8), or if it's over 1 MB (the limit in Chromium-based browsers and Electron), and the file is configured system-wide, then EVERY app that uses WinInet starts flooding the server with the requests over and over, almost in an endless loop (errors are cached only briefly, or not at all).
Over the years this resulted in DDoSing my own server and having its IP blackholed at the BGP level (happened 10+ times). After switching to public IPFS gateways to serve the files, the Pinata IPFS gateway blocked an entire country, and on the IPFS.io gateway the files were among the top 2 requests for weeks (impacting the operational budget of the gateway).
All of the above happens despite tight per-IP per-minute request limits and other measures to conserve bandwidth. It's used by 500,000+ users daily. My web server is a $20/mo VPS with unmetered traffic, and thanks to that, I was never in the OP's situation :)
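(For readers who haven't run into this: the file being described sounds like a proxy auto-config (PAC) script; that's my inference from the WinInet and 1 MB details, not something stated above. A PAC file is a single JavaScript function that every client evaluates, which is why one syntax error poisons everything:)

    // A minimal PAC file. It must be plain old JavaScript (IE on Win7/8
    // has to parse it) and stay under the ~1 MB limit that Chromium-based
    // browsers and Electron enforce.
    function FindProxyForURL(url, host) {
      // Route internal hosts through the proxy, everything else direct.
      if (dnsDomainIs(host, ".corp.example.com")) {
        return "PROXY proxy.corp.example.com:8080";
      }
      return "DIRECT";
    }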
This is still bad. I was really hoping the bug would have been something like "I put a 5 minute check in for devs to be able to wait and check and test a periodic update check, and forgot to revert it". That's what I expected, really.
The author seemed to enjoy calculating the massive bandwidth numbers, but didn't stop to question whether five minutes was a totally ridiculous interval.
Good on them. Most companies would cap their responsibility at a refund of their own service's fees, which is understandable as you can't really predict costs incurred by those using your service, but this is going above and beyond and it's great to see.
I'll stick with open source. It may not be perfect, but at least I can improve it when it's doing something silly like checking for updates every 5 minutes.
That way I guess you get the caching of the DNS network for free; it uses basically one packet each way, encryption is still possible, and it can reduce traffic greatly if a big org is running a thousand instances on the same network.
I think it was written in Go. Might have been Syncthing
I understand the reasoning, but that makes it feel a bit too close to a C&C server for my liking. If the update server ever gets compromised, I imagine this could increase the damage done drastically.
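For the curious, the DNS-based check from a couple of comments up can be tiny. A sketch, assuming a hypothetical _latest.example.com TXT record holding the current version (and, per the C&C concern, assuming the actual download is signed and verified separately):

    // Update check via a DNS TXT record instead of an HTTP endpoint.
    import { resolveTxt } from "node:dns/promises";

    const CURRENT_VERSION = "1.4.1";

    async function updateAvailable(): Promise<boolean> {
      // Recursive resolvers cache this answer for the record's TTL, so a
      // thousand clients behind one resolver cost the origin one query.
      const records = await resolveTxt("_latest.example.com");
      const latest = records.flat().join("").trim();
      return latest !== CURRENT_VERSION;
    }

    updateAvailable().then((yes) =>
      console.log(yes ? "update available" : "up to date"),
    );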
This was back in the Rails days, before they switched to Scala.
I heard that there was a fail-whale no one could solve related to Twitter's identity service. IIRC, it was called "Gizmoduck."
The engineer who built it had left.
They brought him in for half a day of work to solve the P0.
*Supposedly*, he got paid ~$50K for that day of work.
Simultaneously outrageous but also reasonable if you've seen the inside of big tech. The ROI is worth it.
That is all.
Disclaimer: don't know if it's true, but the story is cool.
Seriously this alone makes me question everything about this app.
Novel dark pattern: You unchecked "Let us collect user data" but left "Automatically Update" checked... gotcha bitch!
> Write your auto-updater code very carefully.
You have to be soooo careful with this stuff. Especially because your auto-updater code can brick your auto-updater.
It looks like they didn't do any testing of their auto update code at all, otherwise they would have caught it immediately.
Previous discussion: https://news.ycombinator.com/item?id=35858778
https://en.m.wikipedia.org/wiki/Knight_Capital_Group#2012_st...
$440M USD
The url specifically asks Wikipedia to serve the mobile site.
You want to spread out update rollouts in case of a catastrophic problem. The absolute minimum is checking once a day, at a random time of day; preferably, roll updates out over multiple days.
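A sketch of the minimum version of that; the interval and the percentage ramp are illustrative, not any particular app's policy:

    // Daily update check at a per-install random offset, plus a staged
    // rollout gate so a bad release reaches the fleet over days, not minutes.
    const DAY_MS = 24 * 60 * 60 * 1000;

    function scheduleDailyCheck(check: () => void): void {
      const jitter = Math.random() * DAY_MS; // each install picks its own slot
      setTimeout(() => {
        check();
        setInterval(check, DAY_MS); // then settle into a daily cadence
      }, jitter);
    }

    // Deterministic cohort: the server ramps `percent` up over several days.
    function inRollout(machineId: string, percent: number): boolean {
      let h = 0;
      for (const c of machineId) h = (h * 31 + c.charCodeAt(0)) >>> 0;
      return h % 100 < percent;
    }

    scheduleDailyCheck(() => {
      if (inRollout("machine-1234", 10)) console.log("checking for updates...");
    });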
The title should have been: "how a single line of code cost our users probably more than $8000"
For those interested in this topic, and how other industries (e.g. the airline industry) deal with learning from or preventing failure: Sidney Dekker is the authority in this domain. Works like Restorative Just Culture or The Field Guide to Understanding Human Error could one day apply to our industry as well: https://sidneydekker.com/books.
Databricks is happy to have us as a customer.
What might be fun is figuring out all the ways this bug could have been avoided.
Another way to avoid this problem would have been using a form of "content-addressable storage". For those who are new, this is just a fancy way of saying: distribute the hash (e.g. SHA-256) of what you're shipping, and store it on disk keyed by that hash, so identical content is deduplicated by name.
It’s probably not so easy as to make it a rule, but most of the time, an update download should probably do this
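A sketch of what that looks like in practice, assuming a hypothetical update manifest that publishes a SHA-256 alongside the download URL:

    // Content-addressed download: skip the transfer when we already hold
    // the exact bytes, and refuse to install anything that doesn't match.
    import { createHash } from "node:crypto";
    import { readFile, writeFile } from "node:fs/promises";

    async function fetchIfNew(url: string, expectedSha256: string): Promise<void> {
      const path = `updates/${expectedSha256}.zip`; // filename IS the hash
      try {
        const cached = await readFile(path);
        const digest = createHash("sha256").update(cached).digest("hex");
        if (digest === expectedSha256) return; // already have this update
      } catch {
        /* not cached yet; fall through to download */
      }

      const res = await fetch(url);
      const bytes = Buffer.from(await res.arrayBuffer());
      const digest = createHash("sha256").update(bytes).digest("hex");
      if (digest !== expectedSha256) {
        throw new Error("digest mismatch, refusing to install");
      }
      await writeFile(path, bytes);
    }

Even the buggy five-minute loop would then have cost one download per release instead of one every five minutes.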
The most obvious one is setting up billing alerts.
Past a certain level of complexity, you're often better off focusing on mitigation than trying to avoid every instance of a certain kind of error.
I think that is the essence of what's wrong with cloud costs: everything defaults to letting everyone scale rapidly, while in reality 99% of customers have quite predictable costs month over month.
If just some JavaScript files change, you don't need to redownload the entire Chromium blob.
Electron really messed up a few things in this world
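What that amounts to is a per-file hash manifest for delta updates; the shape below is made up for illustration:

    // Delta update: compare local vs. remote file hashes and fetch only
    // the paths that changed, leaving the huge runtime blob alone.
    type Manifest = Record<string, string>; // relative path -> sha256

    function filesToFetch(local: Manifest, remote: Manifest): string[] {
      return Object.keys(remote).filter((p) => local[p] !== remote[p]);
    }

    const local: Manifest = { "app.js": "aaa", "chromium.bin": "ccc" };
    const remote: Manifest = { "app.js": "bbb", "chromium.bin": "ccc" };

    console.log(filesToFetch(local, remote)); // ["app.js"] -- the blob stays put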
In the grand scheme of things, $8k is not much money for a business, right? Like we can be pretty sure nobody at Google said "a-ha, if we don't notify the users, we will be able to sneak $8k out of their wallets at a time." I think it is more likely that they don't really care that much about this market, other than generally creating an environment where their products are well known.
Looking at the summary section, I'm not convinced these guys learned the right lesson yet.
Nothing has been learned in this post, and it has cost him $8,000 because of inadequate testing.
It's better for everyone to write tests that keep a $100 issue on your machine from becoming a costly $10M+ problem in production as the product scales after launch.
This won't be the last time, and this is something 'vibe coding' doesn't consider: it will introduce more issues like this.
https://news.ycombinator.com/item?id=43816419
Ummm, no. Even after this they haven't learned. Check for updates on app load and prompt the user to download/update.
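For an Electron app, that flow is only a few lines with the built-in autoUpdater. A sketch (the feed URL is a placeholder; note that Squirrel starts downloading as part of the check, so consent has to come before checkForUpdates):

    import { app, autoUpdater, dialog } from "electron";

    app.whenReady().then(async () => {
      autoUpdater.setFeedURL({ url: "https://updates.example.com/feed" });

      // Ask once, at launch -- not every five minutes.
      const { response } = await dialog.showMessageBox({
        type: "question",
        buttons: ["Check now", "Not now"],
        message: "Check for updates? This may download a large file.",
      });
      if (response === 0) autoUpdater.checkForUpdates();

      autoUpdater.on("update-downloaded", async (_event, notes, name) => {
        const { response: choice } = await dialog.showMessageBox({
          type: "info",
          buttons: ["Restart and install", "Later"],
          message: `${name} is ready to install.`,
          detail: notes ?? "",
        });
        if (choice === 0) autoUpdater.quitAndInstall();
      });
    });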
Curious where the high-water mark is across all HNers (:
Our team had a bug that cost us about $120k over a week.
Another bug running on a large system had an unmeasurable cost. (Could be $K, could be $M.)
Well, you should hire a contractor to set up the console for you.
"Designed for MacOS", aah don't worry, you will have the money from apes back in the no time. :)
$229 per year for a closed-source product, and this is the level of quality you can expect.
You can have all the respect for users in the world, but if you write downright hazardous code then you're only doing them a disservice. What about all the users on metered internet plans whose data you blasted through for 3 months? Are you going to make those users whole?
Learning from and owning your mistake is great and all, but you shouldn't be proud of or gloat about this in any way, shape, or form. It is a very awkward and disrespectful flex on your customers.
Set up daily emails.
Set up cost anomaly alerts.
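On AWS, one concrete version of that is a CloudWatch alarm on the EstimatedCharges metric. A sketch with the AWS SDK v3 (the topic ARN and threshold are placeholders; billing metrics require enabling billing alerts in the account and live only in us-east-1):

    import { CloudWatchClient, PutMetricAlarmCommand } from "@aws-sdk/client-cloudwatch";

    const cw = new CloudWatchClient({ region: "us-east-1" });

    await cw.send(new PutMetricAlarmCommand({
      AlarmName: "monthly-bill-over-500-usd",
      Namespace: "AWS/Billing",
      MetricName: "EstimatedCharges",
      Dimensions: [{ Name: "Currency", Value: "USD" }],
      Statistic: "Maximum",
      Period: 21600,             // billing data updates a few times a day
      EvaluationPeriods: 1,
      Threshold: 500,
      ComparisonOperator: "GreaterThanThreshold",
      AlarmActions: ["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    }));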
I'm sorry, but it's exactly cases like these that should be covered by some kind of test, especially when diving into a refactor. Admittedly, it's nice to hear people share their mistakes and horror stories; I would get some stick for this at work.
https://obsproject.com/
The relevance is that instead of checking for a change every 5 minutes, the delay wasn't working at all, so the check ran as fast as possible in a tight loop. This was between a server and a blob storage account, so there was no network bottleneck to slow things down either.
It turns out that if you read a few megabytes 1,000 times per second all day, every day, those fractions of a cent per request are going to add up!
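The arithmetic is worth writing out (the per-request price below is an assumption for illustration, not any provider's actual rate):

    // "Fractions of a cent" at 1,000 requests/second, all day:
    const reqPerSec = 1_000;
    const pricePer10k = 0.004; // assumed $0.004 per 10,000 read operations

    const reqPerDay = reqPerSec * 86_400;       // 86,400,000 requests/day
    const usdPerDay = (reqPerDay / 10_000) * pricePer10k;
    console.log(usdPerDay.toFixed(2));          // ~$34.56/day, over $1,000/month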
A giant ship’s engine failed. The ship’s owners tried one ‘professional’ after another but none of them could figure out how to fix the broken engine.
Then they brought in a man who had been fixing ships since he was young. He carried a large bag of tools with him and when he arrived immediately went to work. He inspected the engine very carefully, top to bottom.
Two of the ship’s owners were there watching this man, hoping he would know what to do. After looking things over, the old man reached into his bag and pulled out a small hammer. He gently tapped something. Instantly, the engine lurched into life. He carefully put his hammer away and the engine was fixed!!!
A week later, the owners received an invoice from the old man for $10,000.
"What?!" the owners exclaimed. "He hardly did anything!"
So they wrote to the man; “Please send us an itemised invoice.”
The man sent an invoice that read:
Tapping with a hammer………………….. $2.00
Knowing where to tap…………………….. $9,998.00