
when cloudflare is down:
"haha losers that's what you get for using a centralized MitM-as-a-service"

when getting DDoSed:
"help! cloudflare pls save me from the bad guys"

btw. yes, this happened: at a previous job we got hit with a low-bandwidth application-level DDoS of around 1000 req/s, which was 100x our normal traffic and overwhelmed our shitty Django webapp.

Banning IPs / rate limiting didn't help because they kept coming with new ones. I'd somehow need to know the list of IPs ahead of time.

Caching would probably have worked until they figured out that the session cookie (ppl can log in) has to be part of the cache key, at which point random cookies would bust the cache on every request.
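
Something like this nginx micro-cache is what I mean (a sketch only; the zone name, TTL, and backend port are illustrative, and "sessionid" is just Django's default cookie name):

```nginx
# Hypothetical micro-cache in front of the Django app.
# Putting the session cookie in the key keeps logged-in users from
# seeing each other's pages, but it also means a bot that sends a
# random cookie per request gets a cache miss every single time.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m
                 max_size=1g inactive=60s;

server {
    listen 80;
    location / {
        proxy_cache appcache;
        proxy_cache_key "$scheme$host$request_uri$cookie_sessionid";
        proxy_cache_valid 200 10s;          # short TTL: stale beats down
        proxy_pass http://127.0.0.1:8000;   # Django behind gunicorn, assumed
    }
}
```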

1/

I could've found an out-of-spec header they were sending and blocked based on that, and it would've worked until they figured that out too.
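
For illustration, assuming the bots all sent some bogus marker header (the header name and value here are entirely made up), the block could've been as cheap as:

```nginx
# Hypothetical: drop requests carrying an out-of-spec marker header.
# Works only until the attacker notices and stops sending it.
map $http_x_bot_marker $is_bot {
    default        0;
    "botnet-v1"    1;
}

server {
    listen 80;
    if ($is_bot) { return 444; }  # 444: nginx closes the connection, no reply
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
```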

I could've made my own captcha / proof-of-work authentication thing; that'd probably have worked (unless the bots can run JS).
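
A minimal sketch of what I have in mind (names and the difficulty are made up, and in reality the solving side would be client-side JS):

```python
# Toy proof-of-work gate: the server hands out a random challenge and only
# issues a session once the client finds a nonce such that
# sha256(challenge + nonce) starts with DIFFICULTY zero hex digits.
# Verification is one hash; solving costs the client real CPU per request.
import hashlib
import os

DIFFICULTY = 5  # leading zero hex digits; tune toward ~0.1-1 s of client CPU

def make_challenge() -> str:
    return os.urandom(16).hex()

def verify(challenge: str, nonce: str) -> bool:
    digest = hashlib.sha256((challenge + nonce).encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

def solve(challenge: str) -> str:
    """What the client-side JS would do, shown in Python for brevity."""
    n = 0
    while not verify(challenge, str(n)):
        n += 1
    return str(n)
```

Of course this only raises the attacker's cost if the bots actually execute it; a botnet driving real browsers pays the toll the same way users do.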

Or I could use CloudFlare, which appears to do all of the above for me, with economies of scale and more experience than I have.

Or I could've accepted the downtime.

What would y'all have done?

2/2

Actually I'd love to play a cat-and-mouse game with a DDoS attacker, if I had enough time.

But only in a situation where losing is acceptable, and won't harm a bunch of innocent people (like a university providing "us" with free hosting, or users who depend on the site "we" run to get to the right high school).

@wolf480pl It's cool. Just want to make sure you know Cloudflare doesn't actually stop DDoS. If I need DDoS protection, I go run sobbing to OVH, or one of those other companies in that article, not Cloudflare.

@cy I've seen it stop a small application-level DDoS

@wolf480pl Seems to me running your own reverse proxy could do just as well. As in, just running nginx, and proxying the webapp through that. (Which is what Cloudflare does.) Plus you could use fail2ban then.
nginx.com/blog/mitigating-ddos <-- a CF'd site about how to not use CF...
Whatever you could've done, CF did work in that case. They're still trash.
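
The per-IP limiting being described would look roughly like this in plain nginx (a sketch; the zone size and rates are invented, and as the next post notes, it breaks down when the attacker has enough IPs):

```nginx
# Per-source-IP rate limiting in stock nginx (numbers invented).
# Great against a handful of noisy IPs; useless when ~1000 IPs send
# ~1 req/s each, which is exactly the case described below.
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

server {
    listen 80;
    location / {
        limit_req zone=perip burst=10 nodelay;
        proxy_pass http://127.0.0.1:8000;
    }
}
```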

@cy
>fail2ban

They had way too many IPs. Some individual IPs were generating a lot of traffic, but most of them did just 1 or 2 requests.

As I said in mstdn.io/@wolf480pl/1085145505
I could've done a lot of things, and those things are probably a large part of what CF does, but

1/2

@cy
but:

(1) outsourcing that to CF is easier than trying to figure it out myself,

(2) I couldn't find any of that advice at the time (like srsly, I searched, asked on fedi, none of it came up), and

(3) if the attackers did switch to a bandwidth-based attack method, the university (which provided us with free hosting) would probably be mad at us, as it'd affect the whole faculty.

If we had an ISP that offered to take care of L4 and below, I'd probably have tried to keep mitigating it myself.

@cy thanks for the link btw, it may come in handy in the future

@cy I wish people hit by DDoS didn't feel out of options. I wish they didn't feel like CloudFlare is their most reliable option.

I wish I had more information at that time so that I could've avoided CloudFlare.

I wish there was a way to practice DDoS mitigation in a safe environment where the site being unavailable doesn't affect innocent kids' lives.

@wolf480pl A couple of years ago $work was hit with a DDoS. We phoned the service provider the servers were hosted at, and they assisted us in dealing with it.

Services were back up within the hour, because the ISP had experience with DDoS handling.

Today, I'd notify my provider, try to mitigate it myself, and accept the downtime if neither helps. But my business doesn't depend on my servers, so I can afford to do that.

(1/2)

@wolf480pl My personal experience with being on the receiving end of DDoS attacks (admittedly not terribly much experience, but there were a couple of cases over the years) is that most often, they weren't very sophisticated.

Temporarily turning the pages static pretty much nullified half of them. Taking an 8-hour downtime nullified a lot of them too: "it's down, mission accomplished, let's sleep".

I found that attackers who are persistent, and who adapt, are rare.

But I'm a small fish.

@algernon we were hosted at a university, university sysadmins told us "we don't have means to handle it, just use cloudflare or sth"

but I guess you're right, the attacker probably wouldn't've adapted, and wouldn't've tried higher bandwidth, because we were small fish. And it was probably some kiddo being frustrated he didn't qualify for 2nd stage competition, buying time on the cheapest botnet around...

@wolf480pl Heh, yeah, university networks are like that. When my uni was DDoS'd back in the early 2000s, we just went belly up, unplugged from the internet for a few days, and called it done.

Most people didn't notice anything, because we had aggressive caching and lots of mirrors, so frequently accessed stuff was still accessible. We just reconfigured our proxies quickly to not expire anything for a while.

From outside, we were down. But that happened regularly anyway, so no one cared. =)

@algernon pre-pandemic that could've worked actually...

Like, move all deadlines to 12 AM, expect students to use university wifi to submit their assignments, and isolate from the outside internet.

@algernon (not for us, running govt-endorsed coding compos for high-school kiddos, but for the rest of the university it would)

@wolf480pl
--[What would y'all have done?]--

Same thing I always do. Use the cloud thing as the *fallback* not the primary.

@sjb well the DNS record needs to point somewhere...

@wolf480pl Does DNS not have a "try this IP first, then this one if the first one is down" feature?

@sjb another problem is that if bots hit the non-cloudflare IP, they will overwhelm the backend, making the cloudflare "fallback" IP not work either

@wolf480pl @sjb Modern browsers?

Balancing between the records has been a thing since Netscape.

@lanodan @sjb well I heard that some browsers only tried one, and if that failed, well, fail
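
For what it's worth, plain DNS has no preference order for A records; you just get round-robin, so both IPs receive traffic (a sketch, using RFC 5737 documentation addresses):

```
; Two A records for the same name round-robin between clients.
; There is no "try this one first" semantics here; that exists for
; MX and SRV records, but browsers don't use SRV for HTTP.
www.example.org.  300  IN  A  192.0.2.10   ; own server
www.example.org.  300  IN  A  192.0.2.20   ; intended "fallback"
```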

@wolf480pl I am not that much into networking, but won’t simple rate-limiting help against app-level ddos?

@mburakov if you have a global rate limit, you're cutting off legit users.
If you have a per-source-IP rate limit, that doesn't help when the attacker has more IPs than req/s.
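
Back-of-envelope for the attack upthread (the IP count is an assumption based on "most of them did just 1 or 2 requests"):

```python
attack_rps = 1000      # total attack traffic, from the thread
attacker_ips = 2000    # assumed: most IPs made only 1-2 requests total
per_ip_rps = attack_rps / attacker_ips   # 0.5 req/s per source IP

human_rps = 2          # assumed: a person clicking around a webapp
# A per-IP limit strict enough to catch 0.5 req/s would also lock out
# humans, so per-source-IP limiting can't separate this attack from
# legit traffic at all.
assert per_ip_rps < human_rps
```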

@wolf480pl Most people don't need cloudflare, they just need to stop hosting wordpress on a LAMP stack with no caching whatsoever.

@lanodan let me introduce you to sio2, an online judge written in Python, that can sustain a whole 30 req/s given 8 vCPUs

@[email protected] If you're using the gratis tier at #Cloudflare and you do get a serious DDoS attack, they often stop serving your site until you start paying them anyway. But since that is so incredibly rare, people hardly ever get to that point.

People use Cloudflare because of marketing, not because it actually provides any value.

@tyil at previous job, I got hit with a low-bandwidth 1000 req/s DDoS, less than 1 req/s per source IP. The webapp is shit so it couldn't handle it, but that was still 100x the normal load from legit users. CloudFlare was able to filter out those bots.

@[email protected]

> The webapp is shit so it couldn't handle it

If only people would write decent fucking software for a change, they wouldn't need these increasingly stupid band-aids to keep things working. I don't see this as a point in favour of #Cloudflare, more as a point that most people shouldn't be allowed to write software at all.

@tyil well the software in question is written by students and volunteers, the non-profit foundation in charge doesn't have the resources to hire a full-time developer to rewrite it from scratch. I know it's a fucked-up situation but what would you do if you worked there, aside from quitting?

@[email protected] Being volunteer-run or non-profit doesn't change anything. Don't let incompetents write software, and then when it inevitably breaks, choose to harm your (apparently small) userbase's privacy and security by setting up a MitM to hide the fact that the application is absolute garbage.

Just take the hit, be honest to your users, and show a 500 error for a bit. There's no shame in going down and owning up to it. It's much more shameful to cheat your users into thinking you're doing a good job, while also sharing all their information with a third party that is known to fuck up.

@tyil guess you're right in principle, but it takes a lot of courage to do that

@[email protected] That is true. Nobody wants to admit they did something wrong, or made something that wasn't good. But it is the first step to properly improve things. And more often than not, your users will appreciate and understand you more if you're being honest with them. In my experience, the less technically inclined are very happy to just know you are working on things to make it better for them.

@tyil also that'd make the attacker feel good about themselves and brag, which would've made it possible to catch them

@[email protected] Heh, didn't even think about that. Could be a nice bonus, yeah.

@tyil @wolf480pl I see two more reasons to use Cloudflare:
* to hide the IP address (and so, the real location) of the server where the site is hosted;
* to be accessible via both IPv4 and IPv6 when the server has just one of them.

For the first purpose, Imperva can also be used (but without IPv6). And the second purpose is especially relevant for sites hosted on some dirt-cheap IPv6-only VPS.

#Cloudflare #IP #IPv4 #IPv6 #server #hosting #Imperva #Incapsula #cheap #dirtcheap #IPv6only #VPS

@[email protected] @[email protected] You can solve both at once with a $1/mo VPS. If you respect yourself, and more importantly, your users, you should definitely go with a simple and cheap VPS. If you do neither, #Cloudflare is obviously the best choice.

@tyil @wolf480pl First, such a #VPS can sometimes be enough (or even more than enough) for the #site itself. Second, is there any certainty that a cheap VPS would have higher #uptime than #Cloudflare/#Imperva? Third, imagine that someone has five hobbies and so five sites (N.B.: #nonprofit) about these hobbies; it would be bad if search engines noticed that these five sites share the same #IP address; so, sixty dollars per year. Fourth, administering an additional #server would take time.

@wolf480pl @gamliel @tyil Why would search engines care if multiple websites share the same IP address?

@ilyess @wolf480pl @tyil Some people try to cheat the search engines. And search engines suspect every site owner and webmaster of being a cheater. If a few sites share the same IP address, I think search engines will try to decide whether these sites are created by the same person or not (comparing the language, text style, design etc.).

And do not forget about the first, second and fourth points.

@[email protected] @[email protected]

> Second, is there any certainty that a cheap VPS would have higher #uptime than #Cloudflare/#Imperva?

Yes. All my VPSes have better uptime than #Cloudflare. None of them were down this morning.

> it would be bad if search engines noticed that these five sites share the same #IP address

Why would that be bad?

> Fourth, administering an additional #server would take time.

It's also extra time to set up Cloudflare or other DDoS protections, and when done manually through a VPS, it's no more than 2 lines or so of iptables.
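
One plausible reading of those "2 lines" (a guess at what's meant; the thresholds are invented, and note this only catches per-IP floods, not the many-IP attack from upthread):

```sh
# Drop sources holding too many concurrent connections, and sources
# exceeding a per-IP packet rate (hashlimit counts packets, not requests).
iptables -A INPUT -p tcp --dport 443 -m connlimit --connlimit-above 50 -j DROP
iptables -A INPUT -p tcp --dport 443 -m hashlimit --hashlimit-mode srcip \
    --hashlimit-above 20/sec --hashlimit-name https-flood -j DROP
```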

@wolf480pl cryto.net/~joepie91/blog/2016/
> you won't get any actual DDoS mitigation, even if you pay $200/month for their Business plan.
> They're not actually mitigating anything, it just so happens that they are the other side of the connection and thus "take the hit"!

@wolf480pl When running openings.moe I found that just spreading my load across two nginx boxes with rate limits and some caching did an excellent job

@lanodan @quad
pretty sure they also have some IP blacklists and heuristics for adding IPs to those blacklists. And a captcha for when they can't filter out enough automagically.

@wolf480pl @lanodan Yeah.

But honestly, how many sites are small enough to not be able to pay for a CDN service, but also targeted so hard that some rate limiting or QoS wouldn't do the job?

@quad @lanodan
the idea is someone else cares about keeping the config up to date with the latest botnets

@wolf480pl @lanodan I don't even think Cloudflare does. They just absorb most DDoSes with brute force