~dr|z3d
@RN
@RN_
@StormyCloud
@T3s|4
@T3s|4_
@eyedeekay
@orignal
@postman
@zzz
%Liorar
%ardu
%cumlord
%snex
+FreefallHeavens
+Xeha
+bak83_
+hk
+poriori
+profetikla
+qend-irc2p
+r00tobo
+uop23ip
Arch
BubbRubb
Danny
DeltaOreo
FreeB
HowardPlayzOfAdmin1
Irc2PGuest5865
Meow
Onn4l7h
Onn4|7h
acetone_
anontor
evasiveStillness
mareki2p_
maylay_
not_bob_afk
not_human
pisslord
r00tobo[2]
shiver_
simprelay
solidx66
thetia
u5657
zer0bitz
onon_
zzz, dr|z3d, tell me how you handle the loss of a large number of packets (> 256) in streaming. The client receives seqnum 1, 2, 3 and then seqnum 300, 301, 302, 303...
dr|z3d
I think we bump up the RTO, onon_, but have a look at git.skank.i2p/i2pplus/I2P.Plus/src/branch/master/apps/streaming/java/src/net/i2p/client/streaming/impl/Connection.java
onon_
What I meant is that the client cannot send the server an ACK with more NACKs than fit in the 1-byte 'NACK count' field. In your code, I do not see this restriction.
dr|z3d
zzz's the authority on this, but maybe git.skank.i2p/i2pplus/I2P.Plus/src/branch/master/apps/streaming/java/src/net/i2p/client/streaming/impl/Packet.java or git.skank.i2p/i2pplus/I2P.Plus/src/branch/master/apps/streaming/java/src/net/i2p/client/streaming/impl/PacketLocal.java provide some insights.
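For context on the 1-byte limit onon_ is referring to: the streaming packet header stores the length of the NACK list in a single byte, so a packet can carry at most 255 NACKs. A minimal sketch of writing that field, with a hypothetical helper class rather than the actual Packet.java code:

    import java.io.ByteArrayOutputStream;

    class NackFieldSketch {
        // Hypothetical illustration of the 1-byte NACK count constraint;
        // not the actual Packet.java serialization code.
        static void writeNacks(ByteArrayOutputStream out, long[] nacks) {
            if (nacks.length > 255)
                throw new IllegalArgumentException("NACK count must fit in one byte");
            out.write(nacks.length);                    // 1-byte NACK count
            for (long nack : nacks) {
                out.write((int) (nack >> 24) & 0xFF);   // each NACK is a 4-byte sequence number
                out.write((int) (nack >> 16) & 0xFF);
                out.write((int) (nack >> 8) & 0xFF);
                out.write((int) nack & 0xFF);
            }
        }
    }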
onon_
Well, we will wait for zzz
zzz
onon_, you can't have more nacks than the max window size. ours is 128. don't go above 256
onon_
dr|z3d has 512
onon_
public static final int MAX_WINDOW_SIZE = 512;
onon_
In any case, tell me how your code will handle the above situation.
onon_
With a loss of a large number of packets
onon_
Will it drop the packets or buffer them, and what will it send to the server?
zzz
our max rcv window is 128, so anything over 128 we will drop and send a CHOKE. We will never send more than 128 NACKS
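A rough sketch of the receive-side rule zzz describes: anything beyond the receive window is dropped and answered with a CHOKE, and the NACK list is capped at the window size. Class and method names here are invented for illustration; this is not the actual Connection.java logic.

    import java.util.TreeSet;

    class ReceiveWindowSketch {
        static final int MAX_RCV_WINDOW = 128;          // Java I2P's stated max
        private long highestAcked = 0;                  // last in-order sequence number
        private final TreeSet<Long> outOfOrder = new TreeSet<>();

        /** @return true if this packet should be dropped and answered with a CHOKE */
        boolean receive(long seqNum) {
            if (seqNum <= highestAcked)
                return false;                           // duplicate, just re-ACK
            if (seqNum - highestAcked > MAX_RCV_WINDOW)
                return true;                            // beyond the window: drop + CHOKE
            outOfOrder.add(seqNum);
            while (outOfOrder.remove(highestAcked + 1))
                highestAcked++;                         // slide the window forward
            return false;
        }

        /** NACKs are the gaps below the highest received seq, capped at the window. */
        long[] buildNacks() {
            return outOfOrder.isEmpty() ? new long[0]
                : java.util.stream.LongStream.rangeClosed(highestAcked + 1, outOfOrder.last())
                    .filter(s -> !outOfOrder.contains(s))
                    .limit(MAX_RCV_WINDOW)              // never more than 128 NACKs
                    .toArray();
        }
    }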
zzz
dr|z3d can answer what technical analysis and testing he did for 512
onon_
Can I ask you to change the logic a little? Right now I see that in this situation the client starts sending a CHOKE for every packet received. It should instead send CHOKEs at a regular interval.
onon_
Because if the client is receiving data at 1 MB/s, we get a burst of traffic that can overload the route.
zzz
remember these are 1812 byte max packets, 1812*128 = 232KB in flight
zzz
CHOKE should be very rare, I'm not concerned with optimizing it
onon_
I just want to maintain compatibility so that Java I2P can accept an incoming stream from i2pd at high speed
zzz
what's the compatibility issue? I'm confused
onon_
i2pd can now send data at about 1 MB/s. An i2pd client can receive it fine, but Java copes poorly.
zzz
how so?
onon_
i2pd has a window of 512 now
zzz
well, for the reasons stated above, I recommend 256 max
zzz
I also rarely see > 100 in practice
zzz
if you get over 150 or so you probably have congestion control bugs
onon_
We need to increase data transfer speed for i2p
zzz
sure, faster is better
zzz
I don't know what you're asking for. A change in CHOKE sending interval? or something else?
onon_
No, i2pd has a good CC now.
onon_
> change in CHOKE sending interval?
onon_
Yes
zzz
that won't make anything go faster
onon_
In this case, the client sends a large number of packets into the outgoing tunnel, and congestion forms there.
zzz
you just sent me 512 packets 1812 bytes each, I can't send you 200 small CHOKE packets?
onon_
Incoming and outgoing routes have different throughput
zzz
spec: Each endpoint must maintain its own estimate of the far-end receive window, in either bytes or packets.
zzz
if you get a CHOKE after 128, maybe set your max tx window to 128
onon_
Perhaps I didn't get my point across
onon_
Let's get back to this later. Perhaps I will figure out how to explain this.
zzz
you have real-net test results of actual window sizes?
onon_
Try the latest version of i2pd as a server and a client.
zzz
I'm not trying anything )) You want to change things, please have some test results that explain why
onon_
What tests do you need?
onon_
Are these some special tests?
zzz
<zzz> you have real-net test results of actual window sizes?
zzz
also, how many CHOKEs in a row have you received?
onon_
From the Java client, I get a CHOKE for every packet when a large number of them are lost. In i2pd, the client sends them at a regular interval of 1/10 RTT
onon_
Or was the question how often the CHOKE happens during the test?
zzz
right but the burst of CHOKEs should only happen once. adjust your max window size
zzz
how many did you get?
onon_
Their number depends on the size of the window, of course
zzz
well, you told me it was too many, so how many was it?
onon_
I can’t name a specific number now.
zzz
ok. but we can't implement an infinite receive buffer. Also, a buffer of any size can overflow if the client side is slow to empty it
zzz
so you need to reduce your window size to x - 1 or something if you get a CHOKE with x packets outstanding
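A minimal sketch of the sender-side reaction zzz suggests here: treat a CHOKE as evidence that the far end's receive window is smaller than what was in flight, and clamp the transmit window below that. Names and structure are assumptions, not i2pd code.

    class ChokeReaction {
        private int maxTxWindow = 256;      // assumed configured ceiling
        private int txWindow = 3;           // current congestion window (packets)

        void onChokeReceived(int packetsOutstanding) {
            // The far end dropped: its receive window is smaller than what we
            // had in flight, so remember that as our new ceiling.
            maxTxWindow = Math.max(1, packetsOutstanding - 1);
            txWindow = Math.min(txWindow, maxTxWindow);
        }

        void onWindowFullyAcked() {
            // Normal growth stays capped by the learned ceiling.
            txWindow = Math.min(txWindow + 1, maxTxWindow);
        }
    }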
zzz
should have worked all this out before you increased to 512. Maybe drz did, maybe not
onon_
I'm receiving 189 CHOKE acks now
onon_
from java
onon_
burst
zzz
that's a lot ))
onon_
my window size is 339
zzz
I still think you have congestion control problems if you get that many. If you do slow start and congestion avoidance you should be increasing the window size gradually
onon_
I do so, there are no problems with CC.
onon_
CHOKE processing problem now
zzz
at what window size do you switch from slow start to congestion avoidance?
onon_
At the window size where RTT begins to grow.
zzz
if you just go 1-2-...64-128-256-512 you're going to have these issues
zzz
(during slow start)
onon_
No, at the moment the increase is limited to 12 per RTT
zzz
there's no way you should have gotten to 339 then
onon_
12+12+12....
zzz
yeah but my limit is 128 so how did you get to 339?
onon_
my limit is 512 now
zzz
and I believe the RFC recommendation is ONE per RTT. How did you decide on 12?
onon_
const int MAX_WINDOW_SIZE = 512;
onon_
const int MAX_WINDOW_SIZE_INC_PER_RTT = 12;
onon_
ONE per RTT is too slow
zzz
sounds like 12 is way too fast
onon_
No, 12 is also slow
zzz
How did you decide on 12?
onon_
12 is an empirically selected value
zzz
also, I didn't understand your answer, when do you switch from slow start (double per RTT) to congestion avoidance (+12 per RTT) ??
onon_
The algorithm is simple. We increase the window until RTT begins to increase. We record that level and drop the window size to what is needed. Then we hold that window size for a while.
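One possible reading of the delay-based scheme onon_ describes, as a sketch: grow by a fixed step per RTT while the smoothed RTT stays near its baseline, hold when it rises, and back off only when the rise is sustained. The thresholds and names here are invented for illustration and are not the i2pd implementation.

    class DelayBasedWindow {
        static final int MAX_WINDOW_SIZE = 512;
        static final int INC_PER_RTT = 12;              // value onon_ quotes
        private int window = 10;                        // initial window onon_ quotes
        private double baseRtt = Double.MAX_VALUE;      // lowest RTT seen recently
        private double srtt = 0;

        void onRttSample(double rttMs) {
            srtt = (srtt == 0) ? rttMs : 0.875 * srtt + 0.125 * rttMs;
            baseRtt = Math.min(baseRtt, rttMs);
        }

        void onRttElapsed() {                           // called once per RTT
            if (srtt <= baseRtt * 1.1)
                window = Math.min(window + INC_PER_RTT, MAX_WINDOW_SIZE);  // RTT flat: grow
            else if (srtt > baseRtt * 1.5)
                window = Math.max(10, (int) (window * 0.75));              // RTT clearly rising: back off
            // in between: hold the window, as onon_ describes
        }

        int window() { return window; }
    }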
zzz
so you don't have slow start?
onon_
we have
zzz
what's your initial window size?
onon_
10
zzz
so during slow start you go 10-20-40-80 ... until the RTT increases?
onon_
At the moment, even slow start is limited to 12 packets per RTT
zzz
so you DONT have slow start
zzz
if you had slow start you wouldn't need 12/RTT which is insane
zzz
look, you think what you have is perfect, but I think you have major issues:
zzz
-no slow start
zzz
- 12/RTT is WAY too high
zzz
- CHOKE handling problems
zzz
- 512 max window obviously wasn't well-tested
zzz
you're firehosing things out and then thinking it's the receiver's fault not yours
onon_
I do not think this is perfect, it is just much better than in Java i2p
zzz
our initial window size is 3, we do slow start (doubling) up to 64, then +1/RTT after that
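A sketch of the growth schedule zzz describes for Java streaming: initial window 3, slow-start doubling up to 64, then +1 per RTT. Simplified; the real Connection.java also reacts to loss and CHOKE, which is omitted here.

    class JavaStreamingGrowthSketch {
        static final int INITIAL_WINDOW = 3;
        static final int SLOW_START_THRESHOLD = 64;
        static final int MAX_WINDOW = 128;
        private int window = INITIAL_WINDOW;

        void onWindowAckedWithoutLoss() {
            if (window < SLOW_START_THRESHOLD)
                window = Math.min(window * 2, SLOW_START_THRESHOLD);   // slow start: double per RTT
            else
                window = Math.min(window + 1, MAX_WINDOW);             // congestion avoidance: +1 per RTT
        }

        int window() { return window; }
    }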
zzz
and using RTT for a backoff indication is bizarre
onon_
loss is better?
zzz
what's your metric for 'much better' ? you have any stats like average number of retransmissions?
zzz
I don't know what's better, but what's your criterion for 'better' ?
zzz
if you're faster but you retx everything four times, that's not better. Efficiency has to be a part of it
zzz
I've added some notes to i2p-projekt.i2p/en/docs/api/streaming based on the above conversation
onon_
No, I don't resend everything 4 times
onon_
Random packet loss is too high on our network. Our only option is to rely on RTT as the signal.
zzz
ofc not, just an example
onon_
Otherwise java i2p will remain very slow
zzz
the question is, what does "better" mean to you?
onon_
faster
zzz
that's it? nothing about retransmission percentage, efficiency, number of acks, you don't care about any of that?
zzz
and I still don't believe you got to (339-12) window size without any packet loss, then at 339 you got 189 CHOKEs
zzz
are you reducing window size on loss or only on RTT increase?
onon_
I care about all of the above
onon_
now only on RTT
onon_
loss is disabled
zzz
lololololol
zzz
that's why it's so broken
onon_
I send packets effectively, using pacing, so I do not have a large number of lost packets.
zzz
that's how you got to 339
onon_
In the above example, I had only 1 nack before CHOKE happened
zzz
1 is enough
onon_
it may be random packet loss
zzz
ignoring losses is a wild design choice, it's hard to offer any advice after finding that out
onon_
I did not ask for advice, I asked you to make changes to your code
zzz
yeah, but you've thrown away every RFC and are doing your own thing, which is why it took me two hours to understand
zzz
add slow start up to 64, reduce to +1/RTT, backoff on loss, and you won't blow out receive buffers by 189 packets
onon_
I think I need to find a way to detect that the client is Java I2P and specifically slow down sending in that case.
zzz
and reduce the max win to 256 since, as you noted, the max nack count is 255
zzz
I don't recommend trying to identify client implementation type. I recommend +1/RTT after slow start, and not ignoring packet loss
zzz
you said you 'don't have a large number of lost packets'. Have any data? % retx?
onon_
I do not have ready-made statistics at the moment.
zzz
ok
zzz
I also question whether RTT is sensitive enough to use as your sole congestion indication, ignoring loss
onon_
You can see the work of the algorithm here if you are interested. 6woqj4si4zc4j6gyie63qcpnenuy7c5nukket53ayoe4wo4a5naa.b32.i2p/chart12.png
onon_
blue is win size
onon_
green is RTT
zzz
hops could start dropping at a queue delay of only a few 10's of ms
zzz
nice charts
zzz
looks to me like you're reacting way too slowly to RTT increase, as I said middle hops won't buffer too much before they start dropping
zzz
so your bw crashes
zzz
don't know what RFC or paper you based your design on, but looks to me like it needs some tweaks
onon_
As you can see, I stop increasing the window immediately after RTT starts growing. Why don't I drop the window right away? Because it could be a single spike.
zzz
immediately? in chart12, RTT went from 200 to 400 before you stopped increasing, and RTT went from 200 to 700 before you reduced the window
zzz
don't know what the X time scale is but you're reacting way too slowly, and ignoring loss makes it much worse
onon_
We can only detect congestion on the network after 1 RTT, when we get an ACK with increased delay. Not earlier.
zzz
whats the X scale?
zzz
point remains, RTT went up by 3.5X before you reduced window
onon_
Yes, this delay is necessary to make sure the overload is sustained and not a momentary peak.
zzz
so that's the flaw with delay-based and ignoring losses. You're increasing way too fast and decreasing way too late
onon_
I do not want to argue with you
dr|z3d
you're right, you don't. :)
onon_
Your loss-based approach overloads the transit nodes much more.
dr|z3d
I've reduced MAX_WINDOW_SIZE to 256 for now, let's see how we get on with that.
onon_
dr|z3d, can you change your code so as not to send an ACK for every packet?
onon_
If CHOKE
zzz
what's the X scale on the graphs?
dr|z3d
If zzz's not convinced that pacing chokes is a good idea, I'm not going to argue.
onon_
It's not a linear scale; the data points are collected on each received ACK.
onon_
That means roughly every 1/10 RTT
zzz
you have data or a theory for why loss-based overloads more?
zzz
or a pointer to some RFC or paper or spec your design is based on?
onon_
Yes, I understand this is not your problem, but i2pd still has no RED. And when the queue fills up, it does not drop packets.
zzz
I'm happy to read up on delay-based cong. control design if you give me a pointer, it sounds bananas to me
zzz
and +12/RTT and no slow start is definitely bananas
onon_
I do not suggest you use this algorithm.
onon_
In addition, I plan to rework it soon so that it runs even faster.
zzz
wasn't considering it )) just looking for documentation to support your claims
onon_
no claims
zzz
"faster"; "doesn't overload transit nodes as much"; "do not have large number of lost packets"; "packets are sent effectively"; "much better than java i2p"; "no problems with CC"; "good CC now"
onon_
It's all true
onon_
Do what you think is right. I just want I2P to be fast, so that users don't turn away from it because of the data transfer speed.
onon_
i2pd can be fast, but most users run Java I2P. They will form the impression that I2P is slow.
zzz
sure, java streaming can do better, esp. on reactions to loss
zzz
but you gotta turn down your firehose
zzz
suggest you make the following changes:
zzz
- reduce max win to 256
zzz
- implement loss-based slow-start up to 64 max
zzz
- reduce congestion avoidance phase growth rate from 12/RTT to 1/RTT
zzz
- research re-enabling loss detection during congestion avoidance
zzz
EOT
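Putting zzz's four suggestions together as a hedged sketch of a sender: cap the window at 256, loss-based slow start up to 64, +1/RTT in congestion avoidance, and a multiplicative backoff when loss is detected. Constants and structure are assumptions, not proposed i2pd code.

    class SuggestedSenderSketch {
        static final int MAX_WINDOW = 256;          // zzz's recommended cap (1-byte NACK count holds at most 255)
        static final int SLOW_START_MAX = 64;
        private int window = 10;

        void onRttWithoutLoss() {
            if (window < SLOW_START_MAX)
                window = Math.min(window * 2, SLOW_START_MAX);  // slow start
            else
                window = Math.min(window + 1, MAX_WINDOW);      // congestion avoidance
        }

        void onLossDetected() {
            window = Math.max(1, window / 2);                   // back off instead of ignoring loss
        }

        int window() { return window; }
    }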
onon_
I heard you. No need to repeat.
zzz
:) and react to RTT increase much quicker. I think codel starts dropping at 15 ms latency in the queue? but don't quote me, need to research it
onon_
When i2pd worked exactly the way you want, it was very slow.
dr|z3d
what's a good choke rate send for you, onon_? Once every 200ms?
onon_
At the same rate as a normal ACK
dr|z3d
private long lastChokeSendTime = 0; // class-level variable
dr|z3d
synchronized void setChoking(boolean choke) {
dr|z3d
long now = System.currentTimeMillis();
dr|z3d
// Only allow sending a choke message if 200ms has passed since last send
dr|z3d
if (choking != choke && (now - lastChokeSendTime >= 200)) {
dr|z3d
if (_log.shouldDebug()) {
dr|z3d
_log.debug("[" + peer + "] setChoking(" + choke + ")");
dr|z3d
}
dr|z3d
choking = choke;
dr|z3d
out.sendChoke(choke);
dr|z3d
lastChokeSendTime = now; // update last send time
dr|z3d
}
dr|z3d
}
zzz
like I said, I don't fully understand a delay-only cong. control design. I'm offering to go research it and get smarter and come back later, if you tell me what research or spec your design is based on
zzz
onon_, what was the time from first to last of the 189 CHOKEs?
onon_
2 sec
zzz
that's a lot faster than 5/sec
onon_
And yes, of course, do not send ACK for every missing packet
dr|z3d
zzz, every 200ms minimum for chokes, does that sound sane?
onon_
Perhaps I tested on an old version, and this is already fixed in the new one?
dr|z3d
the code above is what I'm proposing, it's not in the codebase.
zzz
1) he's asking for RTT/10, not 200 ms
zzz
2) not a code review but not clear that's the right place, it only happens if choke state changes
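A hypothetical variant that addresses both of zzz's points: pace repeated CHOKEs at RTT/10 rather than a fixed 200 ms, and rate-limit re-sends of an unchanged choke state rather than only acting on state changes. The method name and placement are assumptions, not code from the I2P+ tree.

    class ChokePacer {
        private long lastChokeSendTime = 0;
        private boolean choking = false;

        /** @return true if a CHOKE should actually be sent now */
        synchronized boolean shouldSendChoke(boolean choke, long rttMs) {
            long now = System.currentTimeMillis();
            long minInterval = Math.max(10, rttMs / 10);       // onon_'s requested RTT/10 pacing
            if (choke != choking) {
                choking = choke;                                // state change: always send
                lastChokeSendTime = now;
                return true;
            }
            if (choke && now - lastChokeSendTime >= minInterval) {
                lastChokeSendTime = now;                        // still choked: re-send at paced interval
                return true;
            }
            return false;
        }
    }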
zzz
but this is like getting bombs dropped on you and shooting back rubber bands and getting complaints about too many rubber bands
dr|z3d
:)
dr|z3d
good analogy.