IRCaBot 2.1.0
GPLv3 © acetone, 2021-2022
#i2p-dev
/2025/05/27
@eyedeekay
&zzz
+R4SAS
+RN
+StormyCloud
+acetone
+dr|z3d
+eche|off
+hagen
+hk
+mareki2p
+orignal
+postman
+qend-irc2p
+snex
+wodencafe
Arch
BubbRubb
Daddy_1
Danny
DeltaOreo
FreeRider
FreefallHeavens
HowardPlayzOfAdmin
Irc2PGuest18287
Irc2PGuest38625
Irc2PGuest89334
Onn4l7h
Onn4|7h
Sisyphus
Sleepy
SlippyJoe_
Teeed
ardu
b3t4f4c3___
bak83
cumlord
death
dr4wd3_
eyedeekay_bnc
f00b4r
not_bob_afk
onon_
poriori
profetikla
r00tobo_BNC
rapidash
shiver_1
solidx66
u5657
uop23ip
w8rabbit
weko_
wew
x74a6
zzz eyedeekay, gitea down
eyedeekay It was back by the time I looked at it
zzz ok, the message was a little different than usual
zzz remote:
zzz remote: error:
zzz remote: error: Internal Server Connection Error
zzz remote: error:
zzz error:
zzz error: Failed to execute git command
zzz error:
zzz remote: . Processing 1 references
zzz send-pack: unexpected disconnect while reading sideband packet
zzz error: error in sideband demultiplexer
eyedeekay Hm, that's new. All this stuff that's been happening seems to come down to a few locking issues in the interaction between git and gitea, and instinctively this seems like a related manifestation of that. I've been archiving the log every time it goes down so I can track it down, but most problems just go away after cleaning up the locks and restarting
eyedeekay The locking is also demonstrably the cause: malfunctions begin to occur when git operations compete for a lock
eyedeekay So now we're automatically cleaning locks on every restart, and automatically restarting when lock competition causes a timeout. That takes about 2-5 minutes because it checks for false positives before killing/restarting, and it takes a backup first. Restarts happen about 1-2 times a day, and if you hit it in exactly that window, which I think you did, it probably acted weird
eyedeekay See if I can find you in the logs...
zzz no need, I leave it to you
eyedeekay Well I'm curious, and if I can find the event maybe it's another clue
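[Editor's note: a minimal sketch of the watchdog behavior eyedeekay describes above (clean stale git locks on restart, confirm over a few minutes that a lock-contention timeout isn't a false positive, take a backup, then restart). The paths, the log pattern used to detect timeouts, and the systemd unit name are all hypothetical stand-ins, not details from the actual deployment.]

#!/usr/bin/env python3
# Illustrative gitea watchdog sketch. All paths, patterns, and service
# names below are assumptions for the example, not the real deployment.
import glob
import os
import re
import shutil
import subprocess
import time

REPO_ROOT = "/var/lib/gitea/repositories"    # hypothetical repo root
LOG_FILE = "/var/lib/gitea/log/gitea.log"    # hypothetical log path
BACKUP_DIR = "/var/backups/gitea"            # hypothetical backup dir
# Assumed log signature for a lock-contention timeout.
TIMEOUT_PATTERN = re.compile(r"lock.*timeout", re.IGNORECASE)
CONFIRM_CHECKS = 6      # re-check before acting, to filter false positives
CHECK_INTERVAL = 30     # seconds between checks (~3 minutes total)

def stale_locks():
    # Find leftover git lock files anywhere under the repository root.
    return glob.glob(os.path.join(REPO_ROOT, "**", "*.lock"), recursive=True)

def clean_locks():
    # Clean locks on every restart, matching the mitigation described.
    for lock in stale_locks():
        os.remove(lock)

def log_shows_timeout():
    # Heuristic: scan the tail of the log for a lock-contention timeout.
    with open(LOG_FILE, errors="replace") as f:
        tail = f.readlines()[-200:]
    return any(TIMEOUT_PATTERN.search(line) for line in tail)

def confirmed_timeout():
    # Require several consecutive positive checks before restarting, so a
    # transient slow operation doesn't trigger a needless kill/restart.
    for _ in range(CONFIRM_CHECKS):
        if not log_shows_timeout():
            return False
        time.sleep(CHECK_INTERVAL)
    return True

def backup_then_restart():
    # Archive the log (as eyedeekay does), then stop, clean, and restart.
    os.makedirs(BACKUP_DIR, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    shutil.copy(LOG_FILE, os.path.join(BACKUP_DIR, f"gitea-{stamp}.log"))
    subprocess.run(["systemctl", "stop", "gitea"], check=True)
    clean_locks()
    subprocess.run(["systemctl", "start", "gitea"], check=True)

if __name__ == "__main__":
    while True:
        if log_shows_timeout() and confirmed_timeout():
            backup_then_restart()
        time.sleep(60)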