I'm surprised - simply because I never get Pinterest results on Google. Now admittedly most of my searches aren't the kind where Pinterest is likely to have relevant results, but even then, surely I'd at least see them _sometimes_. But I literally can't remember the last time I saw a Pinterest search result.
Unless, as you suggest, they take over Google Images but not text search results? I could believe that I use Image search sufficiently rarely that I wouldn't have seen a Pinterest result.
I only get Pinterest results when I'm searching for something generic enough, and in those cases, why not use an image from Pinterest. I don't really understand the hate.
It's a nightmare for finding the original sources of images. For example, I was looking for a new sink basin and doing some quick image searches for various styles.
All the ones I liked were Pinterest posts with zero attribution. A reverse image search then just brings up dozens of ripped and reposted copies of that Pinterest post, also without attribution.
I assume that with the 'popularity' bias (probably not the right phrase) of the modern internet, this is pretty much the future of search. Someone comes up with something cool and posts a pic, someone else puts it on Twit/Face/Tube/whatever, and it gets reposted over and over and over. Since the original poster is some worthless peon as far as the algorithm is concerned, you'll never, ever find them.
I wonder if that's something that can be addressed by embedding the right metadata into images/videos? Most people don't bother even checking e.g. Exif data (let alone stripping or otherwise altering it) when reposting content they find online.
I can't speak for every platform, but when I was posting photos frequently, most in-camera or post-editing metadata was stripped out on Instagram and Facebook. Some smaller sites like Gab didn't seem to mess with it as much, but the bulk did. I wouldn't be surprised if all of the other big ones did, too.
It was incredibly disheartening to have no recourse to attribute my own work, other than to smear some gross watermark on it. The automatic removal of that metadata, along with the rise of AI image generation, is among the reasons why I gave up on the hobby entirely.
It's incredibly hard and stressful to derive any sort of pleasure or interest from something when the second it's exposed to the internet, any sense of humanity you tried to attach to it is stripped away, burned, and commercialized for the monetary benefit of some ethereal financier. It's the sound of an invisible vacuum cleaner, whisking away any sense of joy or life you wanted to share with the world for common love; the death of sharing. For-pay hugs.
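For the technically curious: the stripping described above is mechanically simple, since Exif lives in a JPEG APP1 segment that can be dropped wholesale. Here's a toy pure-Python sketch of that step - the function name and structure are mine, not any platform's actual pipeline (real upload pipelines typically re-encode the whole image):

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Remove Exif APP1 segments from a JPEG byte stream (toy illustration)."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(jpeg[:2])
    i = 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        # Start-of-scan: everything after this is entropy-coded image data.
        if marker == 0xDA:
            out += jpeg[i:]
            return bytes(out)
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")  # includes the 2 length bytes
        segment = jpeg[i:i + 2 + length]
        # APP1 segments holding Exif data start with the "Exif\0\0" signature.
        is_exif = marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"
        if not is_exif:
            out += segment
        i += 2 + length
    out += jpeg[i:]
    return bytes(out)
```

Since the segment is just skipped over rather than parsed, any attribution a photographer embedded there vanishes without the platform ever looking at it.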
Pinterest is always a dead end for me. I don't have an account, so I can't actually access anything that the link is taking me to. It's a giant turd in my search results.
And even if you are logged in, there's a good chance you get redirected to some other useless page rather than the image you were trying to view. Or, if you're not logged in, by the time you do log back in, you've lost the original link and you're dumped on a random feed.
S3 is great for being able to stick files somewhere and not have to think about any of the surrounding infrastructure on an ongoing basis [1]. You don't have to worry about keeping a RAID server, swapping out disks when one fails, etc.
For static hosting, it's fine, but as you say, it's not necessarily the cheapest, though you can bring the cost down by sticking a CDN (Cloudflare/CloudFront) in front of it. There are other use cases where it really shines though.
[1]: I say ongoing basis because you will need to figure out your security controls, etc. at the beginning so it's not totally no-thought.
> I’ve seen other examples where customers guess at new APIs they hope that S3 will launch, and have scripts that run in the background probing them for years! When we launch new features that introduce new REST verbs, we typically have a dashboard to report the call frequency of requests to it, and it’s often the case that the team is surprised that the dashboard starts posting traffic as soon as it’s up, even before the feature launches, and they discover that it’s exactly these customer probes, guessing at a new feature.
This surprises me; has anyone done something similar and benefitted from it? It's the sort of thing where I feel like you'd maybe get a result 1% of the time if that, and then only years later when everyone has moved on from the problem they were facing at the time...
The US has a very weak legal deposit scheme compared to e.g. the UK. IIRC, legal deposit is only required where the author applies for copyright registration, so it’s extremely unlikely that a newspaper would be subject to the legal deposit scheme.
Ah! So step 2 is wait for the spammers to automate blacklisting of Daisy phone numbers, and only then start rolling out a (paid) Daisy option to customers.
Not connecting calls doesn't waste spammer money, but maybe Daisy does.
If the big telco can find 10 righteous callers from a bad-actor telecom, they should keep routing the calls.
Then, once the spammers have blacklisted the Daisy numbers, cycle those spam-free numbers to their customers and start a new batch of Daisy numbers. This way, there is a constant flow of spammer-free numbers being cycled into the pool. Of course, everyone and their dog wants your phone number, so you will have to be careful who you give it to if you want it to stay spam-free.
As long as the scammer's paying to route the call, I'm ok with this. And the telcos' fitness function for their pool of robogrannies should be time-spent-on-call. Making it uneconomic is the way to kill it.
My friend works for a big telco and is the guy fixing this problem for them. They have amazing powers of deception when they need it. New numbers can be conjured up at any time.
The new fad among wireless carriers here in the US is to route what they think are spam calls to a fake voicemail box.
Voicemail that is left in this generic voice mail box never makes it to their customer and the customer is completely unaware that some of their calls have been diverted.
Then suddenly, calls from consenting callers to consenting receivers are labeled as spam and blocked. What can you do about it? Nothing. Switch to email, I guess. Oh wait, same problem.
You sound like an advocate for telemarketers. Am I correct?
I doubt very seriously that the pool of people who have knowingly, intentionally, and explicitly opted in/consented to telemarketing - that is, without any dark-pattern involvement and with a clear and unambiguous consent experience - is very large. In fact, I think it is infinitesimal, because I can’t recall ever seeing such a consent UX - they ALL involve dark patterns. And if you pair that with “marketer who diligently implements all state & FTC requirements and does timely and accurate processing of removal requests,” I think the 3 relationships left are web app UX testers.
I think the world would be a better place without telemarketing or email marketing. Maybe a “one email per year” limit per merchant who you have actually paid money to and not opted out of.
I’m not OP, but my worry is about the false positives. I have real inbound calls and emails getting detected as spam all the time. Luckily my VoIP provider has a spam box I can look in, but at this point I just have to go through them every so often to make sure I’m not missing anything important.
If the telecoms can perfectly predict the telemarketers, then I’d love it. But in practice how often is this going to block people I know from calling me? Probably not never, and then we just have to give up on phones as a reliable method of communication.
Exactly. Many people want to be able to receive phone calls from their doctors, airlines and schools. These types of B2C calls are presumably most likely to be marked as spam in the event of false positives.
I’ll take my chances. 99% of the people I want to talk to either email/text me first or are already in my contacts list (which I’m not really all that picky about). I’ll accept that failure rate.
Isn't "emailing you first" just kicking the can down the road? What stops spam emails getting through to you? (Besides the exact kind of heuristic filtering you seem to be objecting to, that is.)
Disneyland will have a handful of IPs and phone numbers, and I'd bet my hat they have a team aggressively calling any ISP or provider that flags them as spam.
Bulk scams by mail are at least less common because mail fraud is investigated pretty seriously and results in federal felony charges. Not to mention the cost of initiation is much higher. Unfortunately individuals are still sometimes targeted.
> Not to mention the cost of initiation is much higher
This is the thing we screwed up for email and phone (after per call fees dropped to zero).
It's not rocket science to create systems that net to zero for common usage (balanced in-bound vs out-bound), but charge an arm and a leg for bulk senders.
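A sketch of what such a schedule could look like - every number here (the flat allowance, the free ratio, the per-message fee) is invented purely for illustration:

```python
def monthly_fee(sent: int, received: int,
                free_ratio: float = 2.0, fee_per_excess: float = 0.05) -> float:
    """Toy fee schedule: sending is free up to a multiple of what you
    receive, plus a small flat allowance; beyond that, each extra
    message costs money. All parameters are made-up illustration values.
    """
    # Balanced users (in-bound roughly matches out-bound) stay under
    # the allowance; bulk senders with little in-bound traffic do not.
    allowance = int(received * free_ratio) + 100
    excess = max(0, sent - allowance)
    return excess * fee_per_excess
```

Under these made-up numbers, someone who sends 500 messages and receives 400 pays nothing, while a bulk sender pushing a million messages with almost no in-bound traffic pays roughly five cents per message - which is the whole point.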
Until you're running a file server or the equivalent. There has to be some way for a willing recipient to zero-rate or reverse-charge the responses to their requests. The Internet gets this wrong.
The physical mail spammers know to only use deceptive tricks, like "FINAL NOTICE" or pretending to be affiliated with you using some publicly available information. I have not yet seen one dare to full-on lie, because there would be real consequences.
If a scammer puts "FINAL NOTICE" on a solicitation they mailed with no prior relationship, I do still report it as fraud. But that's probably wishful thinking.
Better yet, route all calls for all disconnected/unassigned numbers in their part of the numbering plan to it. It would probably kill robocalling overnight.
I hate to say this, but I find this very difficult to believe.
I don't think any telco puts effort into stopping spammers. I'd like them to, but I don't think it's something they either care about or are legally capable of fixing.
I work for a telco, though not in that department. We put a lot of effort into trying to block spam calls and into adapting our systems to the newest tricks. The reason the results aren't better is (I'm told) a combination of IP telephony, which makes reliable source tracing all but impossible, and common-carrier laws, which mean you can't block a call unless you're 100% certain it's a scam - otherwise you open yourself up to being sued.
My understanding is that the crush risk at Euston is entirely an operational issue of Network Rail's making (NR being the station facility owner), by deliberately not announcing platforms until the last moment, causing passengers to run to the platform en masse. If platforms were announced earlier, the crush risk would be seriously mitigated.
The obvious next question is whether platforms _can_ be announced earlier - to which the answer is, as I understand it, yes. The platforms are known about much further in advance and the reason for the delay appears to be a combination of intransigence by Euston management and a lack of sufficient ticket gateline staff by the train operators.
We're really looking forward to Windows support - I don't think any Actions runner vendor supports Windows at the moment, and we were looking at building our own runners as a result. If this launches soon, we'd be very keen to try it out!
Yup. At the end of the day these logic-bomb-esque mechanisms are unpreventable and just a cat-and-mouse problem.
There should be a way to battle this outside technical measures, like a crowdsourced group of real distributed humans testing apps for anything malicious.
You can detect both the triggered behavior and "hey this looks like a logic bomb" with static analysis. Yes, you'll never trigger this with some dynamic analysis of the app. But "hey, some code that does things associated with malicious or otherwise bad behavior is guarded behind branches that check for specific responses from the app developer's server" is often enough to raise your eyebrows at something.
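As a toy illustration of that kind of static check - in Python source rather than compiled app code, with name lists and the function itself entirely invented for the example - one can flag if-statements whose condition involves a remotely fetched value and whose body makes a sensitive call:

```python
import ast

# Illustrative name lists - a real scanner would track data flow from
# network APIs rather than match hard-coded identifiers.
REMOTE_SOURCES = {"remote_config", "server_flag", "fetch_flag"}
SENSITIVE_CALLS = {"load_payload", "exec", "eval", "system"}

def find_suspicious_gates(source: str) -> list:
    """Return line numbers of if-statements that gate a sensitive call
    behind a server-controlled value (the 'logic bomb' shape)."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if not isinstance(node, ast.If):
            continue
        # Names appearing in the branch condition.
        cond_names = {n.id for n in ast.walk(node.test)
                      if isinstance(n, ast.Name)}
        # Simple function calls appearing anywhere in the branch body.
        body_calls = {c.func.id
                      for stmt in node.body
                      for c in ast.walk(stmt)
                      if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)}
        if cond_names & REMOTE_SOURCES and body_calls & SENSITIVE_CALLS:
            hits.append(node.lineno)
    return hits
```

The code is never executed, so it doesn't matter that the trigger condition is false during review - the suspicious *shape* is visible in the syntax tree either way.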
In this case suspicious code is anything that achieves a fairly narrow subset of possible outcomes so I doubt it would come up much.
It’s a common fallacy to assume that infinitely many worlds produce every possible world: 1, 10, 100, … is an infinite sequence, yet it covers ~0% of the possibilities.
Yeah. It's really really hard to prevent actors from coming up with clever ways to circumvent the automatic checks. But that just means that apple needs to play the cat-and-mouse game. That's what they always say their cut is for, no?
One of the reasons that Apple does a bit of due diligence during the onboarding of a developer and establishing a developer agreement is to ensure they can reliably take legal action against developers that abuse the system.
The possibility of being banned from the Apple App Store ecosystem and/or legal reprisals is one way to deter unwanted behavior that can't be blocked through technical means.
I think AI is close to the point where synthetic users will be indistinguishable from real ones. To mitigate the techniques above and elsewhere in this thread, I think Apple will quickly move towards making the review process both highly automated and continuous.
It really means we need to lean into what ecosystem and government pressure we can apply to ensure the terms are sensible and fair because it will become nearly impossible to hack around them. (I do think many of these hacks are clever, I just don't think they will be enduring.)
I find your comment overly optimistic. Review processes are already highly automated and effective, but even as they advance there will always (short of AGI) be (1) effective tricks to mask behavior from analysis and (2) a need for a human in the loop to verify findings.
I do agree that we need to continue to apply pressure to tear down walls around the garden as a means of protecting code as speech, including the ability to distribute and run it on our devices without burden.