IRCaBot 2.1.0
GPLv3 © acetone, 2021-2022
#i2p-dev
/2022/12/24
@eyedeekay
&kytv
&zzz
+R4SAS
+RN
+RN_
+T3s|4
+dr|z3d
+hk
+orignal
+postman
+wodencafe
Arch
DeltaOreo
FreeRider
FreefallHeavens
Irc2PGuest15271
Irc2PGuest28511
Irc2PGuest64530
Irc2PGuest77854
Nausicaa
Onn4l7h
Onn4|7h
Over
Sisyphus
Sleepy
Soni
T3s|4_
Teeed
aargh3
acetone_
anon4
b3t4f4c3
bak83
boonst
cancername
cumlord
dr4wd3
eyedeekay_bnc
hagen_
khb_
not_bob_afk
plap
poriori
profetikla
r3med1tz-
rapidash
shiver_1
solidx66
u5657
uop23ip
w8rabbit
weko_
x74a6
weko what do you think about this? red = number of transit tunnels, green = traffic, blue and yellow = number of transports
weko p.s. this isn't my router
weko bandwidth P
dr|z3d in line with what we've been observing on the network lately, weko.
weko dr|z3d: do you think this is retroshare or an attack?
dr|z3d I don't know for sure, weko, but it's got all the characteristics of retroshare abuse.
dr|z3d what we've learnt with retroshare via that 333.i2p page I referenced before is that retroshare will continually hike tunnels and host RAM usage until RAM and swap are exhausted or the host is rendered unusable. the user then restarts i2p, and round 2.
weko but why such a big number of tunnels?
weko is the user spamming only one router?
dr|z3d because retroshare using BOB is buggy. seems to want to create a new tunnel for every file it's sharing.
dr|z3d and it's not one router, you'll probably find that network-wide the tunnel requests ramp.
weko and the user has millions of tunnels at one time?
dr|z3d 10s of thousands, sure.
weko client tunnels*
weko and the traffic is for the same reason?
dr|z3d all you need is a couple of buggy retroshares on the network exchanging files and you've got a huge mess.
dr|z3d > Client Tunnels: 18670 Transit Tunnels: 0
dr|z3d you did read that forum thread I posted? 333.i2p/topics/201
weko yes, I did read it
weko that's too big a ramp for this, in my opinion, because client tunnels choose random routers from the local netDb
weko maybe the local netDb is small for this user
weko but this ramp has 6000 transit tunnels, and the user sent, for example, 18000 tunnels
weko 3 routers in the local netDb? not realistic
dr|z3d well, you've got a few choices currently. limit your transit tunnels in i2pd, or run i2p+/i2p with the additional protections they provide.
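[editor's note: the transit-tunnel cap dr|z3d mentions can be set in i2pd's `i2pd.conf` under the `[limits]` section; the value below is only an illustration, not a recommendation from the chat]

```ini
; i2pd.conf -- cap this router's participating (transit) tunnels so a
; network-wide spike can't exhaust local RAM (5000 is an example value)
[limits]
transittunnels = 5000
```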
dr|z3d I've seen it ramp from 6K to 14K in a couple of minutes.
weko yes, I've also seen a ramp from 5000 to 10000 in 10 minutes
weko this is millions of tunnels
dr|z3d 17K part tunnels and counting.
weko dr|z3d: I don't need to limit the number of tunnels because this is not a problem for i2pd (I set 65535)
weko i dont hear my cooler = no problem xD
weko but I think this may create a feedback cycle: more tunnels -> more routers at their tunnel maximum -> more tunnel refusals -> more tunnels
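[editor's note: the feedback cycle weko describes (refusals causing retries causing more requests) can be sketched with a toy model; all numbers and the retry behavior are illustrative assumptions, not measurements of the real network]

```python
# Toy model of the suspected cycle: routers at their transit-tunnel cap
# refuse build requests, and each refused request is retried next round
# on top of the steady base load.
def tunnel_requests_over_time(base_requests, refusal_rate, rounds):
    """Return per-round request totals when refused builds are retried once."""
    requests = base_requests
    history = []
    for _ in range(rounds):
        refused = int(requests * refusal_rate)
        requests = base_requests + refused  # retries add to next round's load
        history.append(requests)
    return history

# With a constant 50% refusal rate, load converges toward 2x the base
# (base / (1 - rate)) rather than exploding; the cycle only runs away
# if extra load raises the refusal rate itself.
print(tunnel_requests_over_time(6000, 0.5, 5))
```

This suggests the loop is self-limiting unless refusals themselves push more routers to their cap, which matches weko's worry about routers hitting their maximum.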
obscuratus weko: Probably the bigger problem is the drop in network-wide exploratory tunnel success that seems to accompany these spikes in participating tunnels.
weko this is for the same reason
obscuratus We don't have a full understanding of what's going on here, but that part is probably more worrisome.
weko more tunnel refusals -> a smaller percentage of successes
weko this is the same problem
obscuratus It seems to be linked, but there's a big part of this picture we don't understand yet.
weko yes
weko i agree
weko but this is definitely not just retroshare
obscuratus There was a poorly configured retroshare user who was a problem for a while. I'm not sure that's still a thing.
obscuratus If you follow the retroshare docs on how to connect to i2p, it's just a single server tunnel
weko +10000 tunnels for each of the 60000 i2p routers = 600M tunnels. 600M tunnels from one user? not realistic
weko okay, maybe it's only routers with a public IP, but that's still a big number for one user
weko or it's many users, which is also not realistic for retroshare
weko maybe the attacker's purpose is to trigger the cycle
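[editor's note: weko's back-of-envelope arithmetic above can be checked quickly; the network size and per-router spike are the chat's assumptions, and the 3-hop tunnel length is a typical I2P default, not a measurement]

```python
# Rough check of the 600M figure from the chat: if every one of ~60,000
# routers carried ~10,000 extra transit tunnels, each client tunnel
# occupying one transit slot per hop, how many client tunnels is that?
routers = 60_000            # assumed network size (from the chat)
extra_per_router = 10_000   # assumed transit-tunnel spike per router
hops_per_tunnel = 3         # typical I2P tunnel length

transit_slots = routers * extra_per_router
client_tunnels = transit_slots // hops_per_tunnel
print(transit_slots)   # total occupied transit slots network-wide
print(client_tunnels)  # implied client tunnels; still implausible for one user
```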