~dr|z3d
@RN
@RN_
@StormyCloud
@eyedeekay
@orignal
@postman
@zzz
%Liorar
+FreefallHeavens
+Xeha
+bak83_
+cumlord
+hk
+poriori
+profetikla
+uop23ip
Arch
DeltaOreo
FreeRider
Irc2PGuest10850
Irc2PGuest19353
Irc2PGuest23854
Irc2PGuest46029
Irc2PGuest48064
Meow
Nausicaa
Onn4l7h
Onn4|7h
Over
T3s|4__
acetone_
anon4
anu
boonst
enoxa
mareki2pb
mittwerk
plap
shiver_
simprelay
solidx66
u5657_1
weko_
RN
LOL
T3s|4
o/ dr|z3d - on -37+ now; minor thing, but on -36+ (and several earlier) my uptime clock is refreshing correctly, but the i2p+ d/l status progress bar is not. Only a soft refresh generates the updated d/l status. I'll let you know if I see this when you push the next build
zzz
persistence is soooo messy... I need to talk it out with somebody
orignal
what's happened?
zzz
nothing, just brainstorming
orignal
the problem is that idk can't replace you
orignal
at least at this point
orignal
he probably needs more time to get in
zzz
not talking about that. thinking about HTTP proxy persistent connections
orignal
what's with it? does it affect me?
orignal
I'm reading the log
zzz
it would be a nice improvement but not easy
zzz
what are you working on?
orignal
on nothing )))
orignal
well there are a few minor outstanding issues
zzz
would you like to take a look at my fix for bitcoin's SAM accept problem?
zzz
where they got in a loop after the laptop went to sleep and came back
orignal
show me
orignal
I think I always use one time acceptors
zzz
so our bug was we would always send OK after STREAM ACCEPT. Then we would immediately send I2P_ERROR after that. They tried to parse I2P_ERROR as the b64 destination
zzz
and die in an infinite loop
zzz
and the control socket stayed up even though the dest was dead
zzz
I think you said you don't have the same problem
orignal
my implementation is different
zzz
the test results of my fix are at the bottom of git.idk.i2p/i2p-hackers/i2p.i2p/-/issues/399
orignal
but I will check the scenario
zzz
so with the fix, most of the time we won't send an I2P_ERROR after OK, but it could still happen if things get closed while you're waiting for a socket
zzz
we always closed the acceptors when the control socket closed, but we didn't handle the other case, where the acceptor finds out first
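A minimal Java sketch of that fix (class and method names here are hypothetical, not the actual i2p.i2p code): check the session before replying OK, and if the dest dies while an accept is pending, send the error and tear down the control socket instead of leaving the client looping.

    import java.io.IOException;

    // Hypothetical sketch of the fixed SAM STREAM ACCEPT flow.
    class SamAcceptSketch {
        interface Session {                       // stand-in for the I2CP-backed session
            boolean isOpen();
            String accept() throws IOException;   // blocks; returns peer dest as b64
        }

        private final Session session;
        SamAcceptSketch(Session s) { session = s; }

        String handleStreamAccept() {
            // Check the I2CP side BEFORE sending OK, so the client never
            // sees OK followed by I2P_ERROR where a b64 dest belongs.
            if (session == null || !session.isOpen())
                return "STREAM STATUS RESULT=I2P_ERROR MESSAGE=\"session closed\"\n";
            String ok = "STREAM STATUS RESULT=OK\n";
            try {
                return ok + session.accept();     // the peer's b64 destination
            } catch (IOException dead) {
                // dest died while we were blocked waiting for a peer; this
                // can still happen after OK in the rare race, so the caller
                // should also close the control socket on this path
                return ok + "STREAM STATUS RESULT=I2P_ERROR MESSAGE=\"session closed\"\n";
            }
        }
    }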
orignal
I still don't understand the scenario. You have an acceptor accepting or not
orignal
if it's accepting you receive a new stream socket and wait for the next
orignal
if you close command socket you close acceptors in destructor
orignal
or whatever you use instead
zzz
yes bitcoin is accepting but java SAM side is dead. But java SAM sent both OK and then I2P_ERROR instead of b64 dest
orignal
java SAM side means not linked through I2CP?
zzz
so bitcoin failed b64 decode and did another accept, infinite loop
orignal
oh you mean STREAM ACCEPT?
orignal
not TCP socket accept
zzz
right, the I2CP side was closed, but java SAM still sent OK after STREAM ACCEPT
zzz
> HELLO VERSION
orignal
so, they send "STREAM ACCEPT"
zzz
< HELLO VERSION
zzz
> STREAM ACCEPT
zzz
< OK
zzz
< I2P_ERROR
orignal
instead of putting it into the queue you send an error
orignal
they don't honor error and start STREAM ACCEPT again
orignal
right?
zzz
right because I2CP is dead but we didn't check until after sending OK
zzz
right
orignal
but what's the actual issue?
orignal
they will keep sending until your I2CP is up
orignal
or just queue up
zzz
yeah but they didn't delay after error, so it was a thousand a second. And we never closed control socket so they kept looping
orignal
but it's their issue not yours
orignal
if they don't handle the error properly and try to reconnect immediately
orignal
not after at least a second delay
zzz
not 100% because we didn't close control socket, and specs didn't say we could send error instead of dest after OK
zzz
so that's what my fix does :)
zzz
and bitcoin is adding a delay after error
zzz
so we're both fixing it
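A rough Java sketch of the client-side half (SamControl is a hypothetical wrapper for the control socket): back off after an error instead of re-issuing STREAM ACCEPT in a tight loop.

    // Hypothetical sketch of the bitcoin-style client fix: delay after
    // an error rather than retrying a thousand times a second.
    class AcceptBackoffSketch {
        interface SamControl { String streamAccept(); }  // blocks until dest or error

        static String acceptWithBackoff(SamControl sam) throws InterruptedException {
            while (true) {
                String reply = sam.streamAccept();
                if (!reply.contains("I2P_ERROR"))
                    return reply;                // the peer's b64 destination
                Thread.sleep(1000);              // >= 1s pause before the next ACCEPT
            }
        }
    }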
orignal
I will check my logic
orignal
usually I queue up accepts
zzz
yeah we queue accepts but only if I2CP is up
orignal
as you know I don't have I2CP in the middle
orignal
what I really need to do is to limit queue size
zzz
problem was making sure everything on the sam side is shut down correctly when i2cp goes away
zzz
don't know about limits, if a client wants to do a million concurrent accepts, he's only killing himself
zzz
let's look and see if we have limits...
orignal
I think not more than 10
zzz
ok, we don't "queue" accepts, we queue incoming SYNs. The accepts all just wait on the head of the queue using java synchronization
zzz
since the SYN queue could be blown up by a remote attacker, we limit it to 64
zzz
and drop any SYNs not accept()ed in 3 seconds
zzz
so there's no limit on concurrent accept()s
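A minimal Java sketch of that arrangement, with illustrative names; as a simplification it drops stale SYNs lazily, only when an acceptor pulls them off the head.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Hypothetical sketch: incoming SYNs are queued (bounded at 64),
    // entries not accept()ed within 3 seconds are dropped, and any
    // number of concurrent acceptors just block on the head of the queue.
    class SynQueueSketch {
        static final int MAX_QUEUED_SYNS = 64;
        static final long MAX_SYN_AGE_MS = 3000;

        static class Syn {
            final long arrived = System.currentTimeMillis();
            // ... peer destination, stream id, etc.
        }

        private final BlockingQueue<Syn> syns = new ArrayBlockingQueue<>(MAX_QUEUED_SYNS);

        /** Called for each incoming SYN; excess SYNs are simply dropped. */
        boolean offerSyn(Syn syn) {
            return syns.offer(syn);          // false (drop) when 64 already queued
        }

        /** Any number of acceptors may block here concurrently. */
        Syn accept() throws InterruptedException {
            while (true) {
                Syn syn = syns.take();       // wait on the head of the queue
                if (System.currentTimeMillis() - syn.arrived <= MAX_SYN_AGE_MS)
                    return syn;
                // stale: not accept()ed within 3 seconds, drop and keep waiting
            }
        }
    }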
orignal
I mean a limit for concurrent "STREAM ACCEPT" requests
orignal
a client might send the next one while the previous is still pending
zzz
we have no limit for concurrent STREAM ACCEPTs
orignal
that's not good
orignal
as you said before nothing should be unlimited
zzz
sure, except that if a local client wants to kill his own router, that's not a huge problem
zzz
important thing is to limit the SYN queue
orignal
they might kill it unintentionally due to a bug, like in the case of bitcoin
zzz
but you're right, bitcoin and prestium sometimes do dumb things on the client side
zzz
like create a thousand tunnels ((
zzz
SYN flooding is the real threat though. That's the whole Tor proof-of-work stuff
Opicaak
Prestium doesn't do dumb things.
Opicaak
It does what is required.
Opicaak
Not my problem i2pd doesn't implement dynamic tunnel pools.
Opicaak
Prestium uses about 50-60 tunnels at the moment.
Opicaak
Not thousands.
orignal
what is dynamic tunnel pool?
Opicaak
Increase tunnels based on usage, bandwidth or leasesets.
orignal
it can be implemented
orignal
if needed
Opicaak
That would be really awesome.
Opicaak
It would drop tunnel usage in Prestium drastically.
orignal
up to 16 I guess?
Opicaak
Yes, that's fine.
zzz
you're referring to this i2pd ticket: github.com/PurpleI2P/i2pd/issues/1831
dr|z3d
what issues are you bumping into with persistent connections, zzz?
dr|z3d
dynamic tunnels is something we've discussed previously. increment/decrement tunnels based on connections and/or bandwidth usage.
dr|z3d
simplest solution is to allocate tunnels based on connections, which is what snark already does.
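A hedged sketch of what such a heuristic might look like in Java; the per-connection threshold is invented for illustration, and the cap of 16 is the figure mentioned above.

    // Hypothetical "dynamic quantity" heuristic: scale the pool with
    // active connections, clamped to a sane range. The /8 threshold is
    // invented; the cap of 16 is the figure from the discussion above.
    class DynamicTunnelSketch {
        static int tunnelQuantity(int activeConnections) {
            int wanted = 1 + activeConnections / 8;    // +1 tunnel per 8 connections
            return Math.min(16, Math.max(1, wanted));  // floor of 1, cap of 16
        }
    }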
zzz
yeah but even simpler is to start with supporting our 2009-era options first
zzz
ok persistence
zzz
we have 2 proxies and 3 sockets:
zzz
browser -- skt -- cl.proxy -- i2pskt -- srv.proxy -- skt -- server
zzz
the sockets are now all bound together 1:1:1, all for one req/resp each, they all get opened and closed together
zzz
so great, let's just make them all keepalive (persistent), keep them all bound together
zzz
sounds simple right?
dr|z3d
so far, so simple.. what's the catch?
zzz
it's a big one
zzz
the persistence property is PER HOP.
zzz
in theory each hop could be one-shot or persistent
zzz
but that's ok, right, let's just ignore that and make them all one-shot or all persistent, keep them all bound together
zzz
now you're f'ed
dr|z3d
well, you don't let it, do you?
dr|z3d
you make sure that persistent sockets are bound to tlds, no?
zzz
the browser won't. It's just worried about its connection to the proxy
zzz
it's the proxy's problem
zzz
so ok, if they're all independent hops, let's just make the i2psocket persistent. That's the only one that matters for performance, right? Who cares about persistent local sockets?
dr|z3d
this sounds like it's about to get circular, or recursive :)
zzz
but then, the browser isn't waiting for one to complete before it asks for the next one, because it's not doing persistence
zzz
so you get say requests on 6 different local sockets, then you have to have a 'pool' of idle i2pskts, look for one, but there probably isn't one
dr|z3d
so if we were addressing multiple proxies at the same time, one per tld, would that help?
zzz
sure you could do some multiplexing but not sure that helps
zzz
so maybe the browser skt and the i2pskt are keepalive, but we leave the server-side skt one-shot? would that help?
dr|z3d
just wondering if the browser wants a single socket per proxy, then having multiple proxies might somewhat alleviate things, in theory at least.
zzz
how would you tell the browser how to do that?
dr|z3d
well, this is the question.
zzz
or foxyproxy? a different port for each hostname? or host:port?
zzz
and let's not even think about the outproxy which is a 4th hop
dr|z3d
not viable. not per hostname.
dr|z3d
and you'd probably need a fairly big pool of proxies if you want to ensure you're not interrupting a socket.
dr|z3d
and then you've got the issue of multiple domains on a single site.
zzz
you might think that switching hosts would be rare but it's gonna do it every time on a redirect to a different host
dr|z3d
not rare, all too frequent. cdns, 3rd party tracking sites..
zzz
the idle sockets will probably expire and get closed pretty quick... 15 sec, 30, something like that
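A hypothetical Java sketch of such an idle-i2psocket pool, keyed by destination, with an invented 30-second expiry:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical idle-socket pool: finished-but-open i2psockets are
    // parked per destination and expired if unused for ~30 seconds.
    class IdlePoolSketch {
        interface PooledSocket { boolean isClosed(); void close(); }

        static final long IDLE_EXPIRE_MS = 30_000;   // "15 sec, 30, something like that"

        static class Idle {
            final PooledSocket sock;
            final long parkedAt = System.currentTimeMillis();
            Idle(PooledSocket s) { sock = s; }
        }

        private final Map<String, Deque<Idle>> byDest = new HashMap<>();

        /** Hand a finished-but-still-open socket back for possible reuse. */
        synchronized void park(String dest, PooledSocket s) {
            byDest.computeIfAbsent(dest, d -> new ArrayDeque<>()).push(new Idle(s));
        }

        /** Look for a reusable socket to dest; often there isn't one. */
        synchronized PooledSocket checkout(String dest) {
            Deque<Idle> q = byDest.get(dest);
            while (q != null && !q.isEmpty()) {
                Idle i = q.pop();
                if (System.currentTimeMillis() - i.parkedAt > IDLE_EXPIRE_MS
                        || i.sock.isClosed()) {
                    i.sock.close();          // expired while parked
                    continue;
                }
                return i.sock;
            }
            return null;                     // caller opens a fresh i2psocket
        }
    }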
dr|z3d
you remember that part of the mozilla document that suggested http/2 was a better solution to the problem we're discussing? it's supported in jetty9, and apparently can be made to work without the https requirement. if persistence seems like it's too much work..
zzz
the good news about hop-by-hop is the hops could be implemented separately; doesn't have to be done all at once
zzz
the browser socket side is pretty easy, it wouldn't buy much of anything but might make us smarter
zzz
that's for pipelining. I'm just talking persistence
zzz
for complexity, persistence << pipelining <<<<<<< http/2
dr|z3d
yeah, forget pipelining, it's more or less deprecated.
dr|z3d
as for http/2, can't we get jetty to do the heavy lifting?
dr|z3d
I mean, it's already implemented.
zzz
not as a proxy
dr|z3d
that's a good point.
dr|z3d
proxies are currently 1.1
zzz
so, for each hop that we want persistence on, we have to monitor the data going through, so we know when the content ends
zzz
if there's a content-length header, we count that many bytes
dr|z3d
you have either content-length, or chunked. not both.
dr|z3d
if a chunked connection's sending a content-length header, it's doing it wrong.
zzz
if it's chunked encoding, we have to eyeball the chunks on the fly, waiting for the end marker
zzz
and then when done, don't close the socket, but hand it back to somebody to reuse
zzz
if there's no length or chunks, or if it's http/1.0, or if the other end said connection: close, or about 6 other rules, then you can't reuse it
zzz
right. but having neither is allowed
zzz
then you just wait for the socket to get closed and hope that was all of it
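A hedged Java sketch of that per-hop reuse decision, covering only the rules mentioned here (zzz notes there are about 6 more); header names are assumed lowercased.

    import java.util.Locale;
    import java.util.Map;

    // Hypothetical sketch: can this hop's socket be handed back for
    // reuse once the current response body has been consumed?
    class ReuseSketch {
        static boolean mayReuse(String httpVersion, Map<String, String> headers) {
            if (!"HTTP/1.1".equals(httpVersion))
                return false;                              // http/1.0: one-shot
            String conn = headers.getOrDefault("connection", "");
            if (conn.toLowerCase(Locale.ROOT).contains("close"))
                return false;                              // peer said connection: close
            boolean chunked = "chunked".equalsIgnoreCase(
                    headers.getOrDefault("transfer-encoding", ""));
            boolean hasLength = headers.containsKey("content-length");
            // with a length we count that many bytes; with chunked we watch
            // for the zero-size end chunk; with neither, end-of-body is the
            // socket closing, so there is nothing left to reuse
            return chunked || hasLength;
        }
    }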
dr|z3d
can I just take a step back and ask what you're hoping this buys us in terms of performance gains?
zzz
the streaming syn/synack and overhead is around 1kb total iirc
zzz
compared to just a few bytes for TCP
zzz
long-lived conns also gives you optimal window sizes
zzz
there's also some latency savings but not as much as TCP because streaming is zero-RTT
Opicaak
re: i2pd ticket. Kind of, but instead of reduceOnIdle, it would be increaseOnUsage(?), or as I've proposed the other day "(inbound|outbound).quantity = dynamic."
dr|z3d
that sounds good in theory, zzz, but we're really talking about the end user here, and I get the sense that .5s worth of improvement loading a page isn't really going to register on the dial.
zzz
yeah Opicaak I get it, I'd still recommend they implement the well-specified options first before inventing new ones
dr|z3d
biab
Opicaak
Understood, zzz.
zzz
dr|z3d, aren't there studies that even 50 ms is perceptible and causes users to think a site is slow?
zzz
Opicaak, the hard part of designing new options is spec'ing them out sanely and how they interoperate with existing options
zzz
'dynamic' just throws it over to the implementer
zzz
we do have that implemented in i2psnark but all by magic on a very specific class of traffic
Opicaak
What I had in mind is something like this: "create 1 tunnel and increase based on the number of leasesets." That was the initial idea for this feature, then dr|z3d commented that it was proposed in the past, and the metric could be bandwidth instead of leasesets.
Opicaak
Perhaps dynamic could mean 2 tunnels/leaseset for better stability.
eyedeekay
Interacting with a multiplexing http proxy is a feature of I2PIPB. Normally it isn't used, but if you hook Firefox+I2PIPB to sam-forwarder-eeproxy, sam-forwarder-browserproxy, or si-i2p-plugin, it will pass an isolating key that sets up a new socket per isolating key; then the http proxy only uses that socket for that isolating key. One mode of operation always treats the hostname as the isolating key. Originally
eyedeekay
it was intended as a privacy enhancement so that each isolating key would have different x-i2p-* headers but it sounds like it kinda does at least 2 of the things you guys have been talking about as well
eyedeekay
Without any new tunnel options
zzz
the x-i2p headers come from the server side so I don't see how any client-side thing changes that
zzz
what's an "isolating key"?
zzz
the client side doesn't see any x-i2p headers ever
eyedeekay
Each of these sockets is using different keys and the http proxy on the other side only serves to direct the traffic to the appropriate socket
zzz
what kind of "keys" ?
eyedeekay
Like crypto keys for the sockets
zzz
for the browser-to-http-client-proxy sockets?
eyedeekay
It goes: Browser adds information(isolating key) to header before sending it to proxy. Proxy reads isolating key and creates a new socket with fresh crypto keys for it or re uses an existing one. Proxy sends browser traffic to new socket associated with isolating key.
eyedeekay
Header is removed by proxy
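A hypothetical Java sketch of that isolating-key routing; the header name and the types are invented for illustration, not the actual I2PIPB wire format:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: one socket (with its own fresh keys) per
    // isolating key; the proxy strips the header before forwarding.
    class IsolatingProxySketch {
        interface IsolatedSocket { void send(byte[] request); }
        interface SocketFactory { IsolatedSocket newSocketWithFreshKeys(); }

        private final Map<String, IsolatedSocket> byKey = new HashMap<>();
        private final SocketFactory factory;
        IsolatingProxySketch(SocketFactory f) { factory = f; }

        void forward(Map<String, String> headers, byte[] request) {
            // e.g. hostname-as-key mode: the browser side added this header
            String key = headers.remove("x-isolation-key");   // strip before forwarding
            if (key == null) key = "default";
            byKey.computeIfAbsent(key, k -> factory.newSocketWithFreshKeys())
                 .send(request);
        }
    }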
zzz
oh this is for connecting to some arbitrary 'multiplexing proxy', not the i2p client proxy, correct?
eyedeekay
Yeah it is, I made some prototypes with SAM years ago and have a feature branch where I am messing with implementing browserproxy in java but it's not in any of the core projects
zzz
but the i2p client proxy does not support persistence. 1 request == 1 socket
eyedeekay
Ah I see, not helpful right now then
zzz
status quo is:
zzz
<zzz> we have 2 proxies and 3 sockets:
zzz
<zzz> browser -- skt -- cl.proxy -- i2pskt -- srv.proxy -- skt -- server
zzz
<zzz> the sockets are now all bound together 1:1:1, all for one req/resp each, they all get opened and closed together
zzz
so if we change that to add persistence, and break the 1:1:1 relationship then you gotta be careful about _which_ socket and _which_ proxy you're talking about
zzz
because we can't pretend it's one end-to-end socket and one proxy in the middle anymore
orignal
guys, I missed something
orignal
do you have a new release?
eyedeekay
No we've pushed back to November as of a couple days ago
orignal
or is only i2pd at 0.9.60?
orignal
we agreed on the week of Sept 18
orignal
then why did I change it to 0.9.60?
eyedeekay
I guess because you missed the update. We postponed for the first time early in September and the second time 2 nights ago.
orignal
maybe
orignal
well then we have two "latest" versions now )))
eyedeekay
That's not ideal but we're also not ready to go with the new release yet. Might be worth doing the easy thing and re-releasing 2.3.0 with an updated API number in the meantime. Sorry I didn't make more of an effort to get you notified of the updated schedule, I'll try and make it right.
zzz
I don't recommend doing a release just for that. What's done is done, focus on a productive release.
orignal
ofc
orignal
was just curious
not_bob
As per previous discussin. I've had android devices catc fire.
not_bob
And I can't seem to type.
not_bob_afk
Also, before I leave. Whoever owns i2psurvey.i2p may update it, or remove it.
orignal
can you skip 0.9.60 in the November release?
zzz
there's no technical reason we can't skip. you and eyedeekay just have to coordinate
zzz
especially when plans change :)
orignal
are you going to have 3 releases instead of 4 in 2023?