Channels can now be detached by leaving them with the reason "detach",
and re-attached by joining them again. Upon detaching, messages for the
channel are no longer forwarded to downstream connections. Upon
re-attaching, the history buffer is sent.
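For illustration, a client could detach and later re-attach a channel
with plain IRC commands like these (the channel name is just an
example):

    PART #soju :detach
    JOIN #soju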
This makes use of cap-notify to dynamically advertise support for
away-notify. away-notify is advertised to downstream connections if all
upstreams support it.
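With cap-notify, the change can be announced to connected downstreams at
runtime, roughly like this (nick and hostname are illustrative):

    :irc.example.org CAP alice NEW :away-notify
    :irc.example.org CAP alice DEL :away-notify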
Some servers do not support TLS, or have invalid, expired or self-signed
TLS certificates. While the right fix would be to contact each server
owner to add support for valid TLS, supporting plaintext upstream
connections is sometimes necessary.
This adds support for the irc+insecure address scheme, which connects to
a network in plaintext over TCP.
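For example (the hostname is illustrative, and the -addr flag name is an
assumption about how the address is passed to `network create`):

    network create -addr irc+insecure://irc.example.org:6667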
This is preparatory work for adding other connection types to upstream
servers. The service command `network create` now accepts a scheme in
the address flag, which specifies how to connect to the upstream server.
The only supported scheme for now is ircs, which is also the default if
no scheme is specified. ircs connects to a network over a TCP connection
with TLS.
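For instance (hostname and the -addr flag name are illustrative; the
second form relies on ircs being the default scheme):

    network create -addr ircs://irc.example.org:6697
    network create -addr irc.example.org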
Store the list of configured channels in the network data structure.
This removes the need for a database lookup and will be useful for
detached channels.
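A rough sketch of the idea, with illustrative names rather than soju's
exact ones:

    // Channel stands for a configured channel record loaded from the database.
    type Channel struct {
        Name string
        // ...
    }

    // network keeps run-time state for one upstream network; holding the
    // configured channels here avoids a database lookup on every access.
    type network struct {
        // ...
        channels map[string]*Channel // keyed by channel name
    }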
Some servers use custom IRC bots with custom commands for registering to
specific services after connection.
This adds support for setting custom raw IRC messages that will be
sent after registering to a network.
It also adds a custom flag.Value type for string slice flags (flags
taking several string values).
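A minimal, self-contained sketch of such a flag.Value implementation
(the type name, the -connect-command flag name and the message values
are illustrative):

    package main

    import (
        "flag"
        "fmt"
        "strings"
    )

    // stringSliceFlag collects every value passed for a repeated string flag.
    type stringSliceFlag []string

    func (v *stringSliceFlag) String() string { return strings.Join(*v, ", ") }

    func (v *stringSliceFlag) Set(s string) error {
        *v = append(*v, s)
        return nil
    }

    func main() {
        var connectCommands stringSliceFlag
        fs := flag.NewFlagSet("network create", flag.ContinueOnError)
        fs.Var(&connectCommands, "connect-command",
            "raw IRC message sent after registration (can be given several times)")
        _ = fs.Parse([]string{
            "-connect-command", "PRIVMSG Q :AUTH alice hunter2",
            "-connect-command", "MODE alice +x",
        })
        fmt.Println(connectCommands)
    }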
Instead of having one ring buffer per network, each network has one ring
buffer per entity (channel or nick). This allows history to be more
fair: if there's a lot of activity in a channel, it won't prune activity
in other channels.
We now track history sequence numbers per client and per network in
networkHistory. The overall list of offline clients is still tracked in
network.offlineClients.
When all clients have received history, the ring buffer can be released.
In the future, we should get rid of too-old offline clients to avoid
having to maintain history for them forever. We should also add a
per-user limit on the number of ring buffers.
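Roughly, the bookkeeping described above could look like this (a sketch
with illustrative names, not soju's exact code):

    // Ring stands for a ring buffer of messages.
    type Ring struct {
        // ...
    }

    type networkHistory struct {
        // offline clients, mapped to the sequence number of the last
        // history message they have received
        offlineClients map[string]uint64
        // ring can be released (set to nil) once every client has caught up
        ring *Ring
    }

    // network has one history entry per entity (channel or nick), plus the
    // overall list of offline clients.
    type network struct {
        // ...
        history        map[string]*networkHistory
        offlineClients map[string]struct{}
    }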
In order to notify the user when we are disconnected from a network
(either due to an error, or due to a QUIT), and when we fail to reconnect,
this commit adds support for sending a short NOTICE message from the
service user to all relevant downstreams.
The last error is stored, and cleared on successful connection, to
ensure that the user is *not* flooded with identical connection error
messages, which can often happen when a server is down.
No lock is needed on lastError because it is only read and modified from
the user goroutine.
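A rough sketch of the de-duplication (function names like notifyError
and broadcastServiceNOTICE are hypothetical):

    type network struct {
        lastError error
        // ...
    }

    // broadcastServiceNOTICE stands in for sending a NOTICE from the service
    // user to every relevant downstream connection.
    func (net *network) broadcastServiceNOTICE(text string) {
        // ...
    }

    // notifyError is only called from the user goroutine, so lastError needs
    // no locking; lastError is reset to nil on successful connection.
    func (net *network) notifyError(err error) {
        if net.lastError != nil && net.lastError.Error() == err.Error() {
            return // identical to the previous error: don't flood the user
        }
        net.lastError = err
        net.broadcastServiceNOTICE("disconnected from network: " + err.Error())
    }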
Closes: https://todo.sr.ht/~emersion/soju/27
Any SendMessage call after Close could potentially block forever if the
outgoing channel was filled up. Now the channel is drained before the
writer goroutine exits.
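The idea, sketched with illustrative names (not soju's exact code):

    type conn struct {
        outgoing chan string   // messages queued by SendMessage
        closed   chan struct{} // closed by Close
    }

    func (c *conn) writer() {
        for {
            select {
            case msg := <-c.outgoing:
                _ = msg // write the message to the socket here
            case <-c.closed:
                // Drain whatever is still queued so that SendMessage calls
                // blocked on a full outgoing channel are released before the
                // writer goroutine exits.
                for {
                    select {
                    case <-c.outgoing:
                    default:
                        return
                    }
                }
            }
        }
    }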
Add bouncer logs, in a network/channel/date.log format, in a similar
manner to the ZNC log module. PRIVMSG, JOIN, PART, QUIT and MODE are
logged.
Add a config directive for the logs path, including a way to disable
logging entirely.
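For instance, a single directive in the bouncer configuration pointing
at the logs root directory could enable it, and leaving the directive
out would disable logging (the directive spelling and path below are
assumptions):

    log /var/lib/soju/logs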
This commit adds support for downstream LIST messages from multiple
concurrent downstreams to multiple concurrent upstreams, including
support for multiple pending LIST requests from the same downstream.
Because a unique RPL_LISTEND message must be sent to the requesting
downstream, and there might be multiple upstreams, each sending their
own RPL_LISTEND, a cache of RPL_LISTEND replies of some sort is required
to match RPL_LISTEND replies together in order to only send one back
downstream.
This commit adds a list of "pending LIST" structs, which each contain a
map of all upstreams that yet need to send a RPL_LISTEND, and the
corresponding LIST request associated with that response. This list of
pending LISTs is sorted according to the order that the requesting
downstreams sent the LIST messages in. Each pending set also stores the
id of the requesting downstream, in order to only forward the replies to
it and no other downstream. (This is important because LIST replies can
typically amount to several thousands messages on large servers.)
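A sketch of the data structure described above (type and field names
are illustrative, not necessarily soju's exact ones):

    // upstreamConn stands for a connection to one upstream server.
    type upstreamConn struct {
        // ...
    }

    type pendingLIST struct {
        // id of the downstream that issued the LIST request; replies are
        // forwarded only to this downstream
        downstreamID uint64
        // upstreams that have not yet replied with RPL_LISTEND, mapped to
        // the raw LIST request to forward to them
        pendingCommands map[*upstreamConn]string
    }

    // pendingLISTs is sorted by the order in which the LIST requests were
    // received from downstreams.
    var pendingLISTs []*pendingLIST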
When a single downstream makes multiple LIST requests, only the first
one will be immediately sent to the upstream servers. The next ones will
be buffered until the first one is completed. Distinct downstreams can
make concurrent LIST requests without any request buffering.
Each RPL_LIST message is forwarded to the downstream of the first
matching pending LIST struct.
When an upstream sends an RPL_LISTEND message, the upstream is removed
from the first matching pending LIST struct, but that message is not
immediately forwarded downstream. If the struct's map of pending
upstreams is then empty, that means all upstreams have sent back all
their RPL_LISTEND replies (which means they also sent all their
RPL_LIST replies); so a unique RPL_LISTEND is sent to the downstream
and that pending LIST set is removed from the cache.
Upstreams are removed from the pending LIST structs in two other cases:
- when they are closed (to avoid stalling because of a disconnected
upstream that will never reply to the LIST message): they are removed
from all pending LIST structs
- when they reply with an ERR_UNKNOWNCOMMAND or RPL_TRYAGAIN LIST reply,
which is typically used when a user is not allowed to LIST because they
just joined the server: they are removed from the first pending LIST
struct, as if an RPL_LISTEND message had been received