Instead of reading chat history from the in-memory ring buffer, always
read it from logs. Unify the implicit chat history (pushing history to
clients) and the explicit chat history (via the CHATHISTORY command).
Instead of keeping track of ring buffer cursors for each client, use
message IDs.
If necessary, the ring buffer could be re-introduced behind a
common MessageStore interface (could be useful when on-disk logs are
disabled).
References: https://todo.sr.ht/~emersion/soju/80
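Such a MessageStore is not introduced here; purely as a sketch (the type
names and method signatures below are assumptions, not an actual API),
it could look like:

    package soju

    import "time"

    // Message is a placeholder for a parsed message plus its ID; the
    // real type and fields are assumptions made for this sketch.
    type Message struct {
        ID   string
        Time time.Time
        Text string
    }

    // MessageStore abstracts where history is read from, so an on-disk
    // log store and an in-memory ring buffer could both implement it.
    type MessageStore interface {
        // Append stores a message for an entity (channel or nick) and
        // returns the ID assigned to it.
        Append(entity string, msg *Message) (id string, err error)
        // LoadAfter returns up to limit messages of an entity sent
        // after the message with the given ID, oldest first.
        LoadAfter(entity, id string, limit int) ([]*Message, error)
    }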
Keep the ring buffer alive even if all clients are connected. Keep the
ID of the latest delivered message even for online clients.
As-is, this is a net downgrade: memory usage increases because ring
buffers aren't freed anymore. However, upcoming commits will replace the
ring buffer with log files. This change makes reading from log files
easier.
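As an illustrative sketch only (names are assumptions), the per-client
bookkeeping could become a nested map of delivered message IDs rather
than a ring buffer cursor:

    // deliveredIDs records, per entity (channel or nick) and per
    // client, the ID of the latest message delivered to that client.
    // It is kept up to date even while the client is connected.
    type deliveredIDs map[string]map[string]string

    // markDelivered remembers that msgID has been delivered to the
    // given client for the given entity.
    func (d deliveredIDs) markDelivered(entity, client, msgID string) {
        if d[entity] == nil {
            d[entity] = make(map[string]string)
        }
        d[entity][client] = msgID
    }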
For Network and Channel, the database only needed to define one Store
operation to create/update a record. However, since User is missing an
ID, we couldn't have a single StoreUser function like the other types
have. We had
CreateUser and UpdatePassword. As new User fields get added (e.g. the
upcoming Admin flag) this isn't sustainable.
We could have CreateUser and UpdateUser, but this wouldn't be consistent
with other types. Instead, introduce User.Created which indicates
whether the record is already stored in the DB. This can be used in a
new StoreUser function to decide whether we need to UPDATE or INSERT
without relying on SQL constraints and INSERT OR UPDATE.
The ListUsers and GetUser functions set User.Created to true.
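A minimal sketch of the resulting logic, assuming a SQL backend with
illustrative table and column names (including the upcoming Admin flag;
the actual schema may differ):

    package soju

    import "database/sql"

    type User struct {
        Created  bool // true if the record already exists in the DB
        Username string
        Password string
        Admin    bool
    }

    type DB struct {
        db *sql.DB
    }

    // StoreUser inserts or updates the user record depending on
    // User.Created, without relying on SQL constraints.
    func (db *DB) StoreUser(user *User) error {
        var err error
        if user.Created {
            _, err = db.db.Exec(
                "UPDATE User SET password = ?, admin = ? WHERE username = ?",
                user.Password, user.Admin, user.Username)
        } else {
            _, err = db.db.Exec(
                "INSERT INTO User(username, password, admin) VALUES (?, ?, ?)",
                user.Username, user.Password, user.Admin)
            if err == nil {
                user.Created = true
            }
        }
        return err
    }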
The user.updateNetwork function is a bit involved because we need to
make sure that the upstream connection is closed before re-connecting
(which would otherwise cause "Nick already used" errors) and that the
downstream connections' state is kept in sync.
References: https://todo.sr.ht/~emersion/soju/17
Previously, the downstream nick was never changed, even when the
downstream sent a NICK message or was in single-server mode with a
different nick.
This adds support for updating the downstream nick in the following
cases:
- when a downstream sends NICK
- additionally, in single-server mode:
  - when a downstream connects and its single network is connected
  - when an upstream connects
  - when an upstream sends NICK
Channels can now be detached by leaving them with the reason "detach",
and re-attached by joining them again. Upon detaching, the channel is
no longer forwarded to downstream connections. Upon re-attaching, the
history buffer is sent.
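A rough sketch of the PART handling (the Detached flag and function name
below are assumptions for illustration):

    // Channel is a minimal stand-in for the stored channel record.
    type Channel struct {
        Name     string
        Detached bool
    }

    // handlePart treats a PART with the reason "detach" as a request
    // to detach the channel: messages stop being forwarded downstream,
    // but the channel is not actually left on the upstream.
    func handlePart(ch *Channel, reason string) (detach bool) {
        if reason == "detach" {
            ch.Detached = true
            return true
        }
        // A regular PART: forward it upstream and delete the channel
        // record (not shown in this sketch).
        return false
    }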
This fixes a serious bug added in 276ce12e, where in newNetwork all
channels end up pointing to the same for loop variable, which causes
soju to only join a single channel when connecting to an upstream
network.
This also adds the same kind of reassignment of a for loop variable in
user.run(), even though that function currently works correctly, as a
sanity improvement in case this function is changed in the future.
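The underlying issue is the usual Go pitfall of taking the address of a
range loop variable, which (before Go 1.22) is shared across iterations;
reassigning it inside the loop body gives each iteration its own copy.
An illustrative example, not the actual soju code:

    // channelsByName maps each channel name to a pointer to its name.
    func channelsByName(channels []string) map[string]*string {
        byName := make(map[string]*string)
        for _, ch := range channels {
            ch := ch // give each iteration its own variable
            // Without the reassignment above (before Go 1.22), every
            // entry would point to the same loop variable and end up
            // holding the last channel in the slice.
            byName[ch] = &ch
        }
        return byName
    }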
This makes use of cap-notify to dynamically advertise support for
away-notify. away-notify is advertised to downstream connections if all
upstreams support it.
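A sketch of the check (the capability bookkeeping shown here is an
assumption):

    // shouldAdvertiseAwayNotify reports whether away-notify can be
    // advertised to downstream connections: it requires every upstream
    // to support the capability.
    func shouldAdvertiseAwayNotify(upstreamCaps []map[string]bool) bool {
        for _, caps := range upstreamCaps {
            if !caps["away-notify"] {
                return false
            }
        }
        return true
    }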
Store the list of configured channels in the network data structure.
This removes the need for a database lookup and will be useful for
detached channels.
Instead of having one ring buffer per network, each network has one ring
buffer per entity (channel or nick). This allows history to be more
fair: if there's a lot of activity in a channel, it won't prune activity
in other channels.
We now track history sequence numbers per client and per network in
networkHistory. The overall list of offline clients is still tracked in
network.offlineClients.
When all clients have received history, the ring buffer can be released.
In the future, we should get rid of too-old offline clients to avoid
having to maintain history for them forever. We should also add a
per-user limit on the number of ring buffers.
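A sketch of the per-entity bookkeeping described above (field names are
illustrative and may not match the actual code):

    // Ring is a stand-in for the in-memory ring buffer of recent
    // messages; its internals are elided in this sketch.
    type Ring struct{}

    // networkHistory tracks history for a single entity (channel or
    // nick) on one network.
    type networkHistory struct {
        // offlineClients maps a client name to the sequence number of
        // the last message it has received for this entity.
        offlineClients map[string]uint64
        // ring holds the messages not yet seen by every client; it can
        // be set to nil (and released) once all clients have caught up.
        ring *Ring
    }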
In order to notify the user when we are disconnected from a network
(either due to an error, or due to a QUIT), and when reconnection fails,
this commit adds support for sending a short NOTICE message from the
service user to all relevant downstreams.
The last error is stored, and cleared on successful connection, to
ensure that the user is *not* flooded with identical connection error
messages, which can often happen when a server is down.
No lock is needed on lastError because it is only read and modified from
the user goroutine.
Closes: https://todo.sr.ht/~emersion/soju/27
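A sketch of the deduplication, assuming it only ever runs on the user
goroutine (names are illustrative):

    // lastError is only read and written from the user goroutine, so
    // it needs no lock.
    var lastError error

    // shouldNotify reports whether a NOTICE should be sent for err.
    // A nil error means the connection succeeded and clears the state;
    // identical consecutive errors are suppressed so that a server
    // which is down doesn't flood the user with the same message.
    func shouldNotify(err error) bool {
        if err == nil {
            lastError = nil
            return false
        }
        if lastError != nil && lastError.Error() == err.Error() {
            return false
        }
        lastError = err
        return true
    }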
This commit adds support for downstream LIST messages from multiple
concurrent downstreams to multiple concurrent upstreams, including
support for multiple pending LIST requests from the same downstream.
Because a unique RPL_LISTEND message must be sent to the requesting
downstream, and because there might be multiple upstreams, each sending
its own RPL_LISTEND, some kind of cache of RPL_LISTEND replies is
required to match them together so that only one is sent back
downstream.
This commit adds a list of "pending LIST" structs, which each contain a
map of all upstreams that still need to send a RPL_LISTEND, and the
corresponding LIST request associated with that response. This list of
pending LISTs is sorted according to the order that the requesting
downstreams sent the LIST messages in. Each pending set also stores the
id of the requesting downstream, in order to only forward the replies to
it and no other downstream. (This is important because LIST replies can
typically amount to several thousand messages on large servers.)
When a single downstream makes multiple LIST requests, only the first
one will be immediately sent to the upstream servers. The next ones will
be buffered until the first one is completed. Distinct downstreams can
make concurrent LIST requests without any request buffering.
Each RPL_LIST message is forwarded to the downstream of the first
matching pending LIST struct.
When an upstream sends an RPL_LISTEND message, the upstream is removed
from the first matching pending LIST struct, but that message is not
immediately forwarded downstream. If that struct then has no remaining
upstreams, that means all upstreams have
sent back all their RPL_LISTEND replies (which means they also sent all
their RPL_LIST replies); so a unique RPL_LISTEND is sent to downstream
and that pending LIST set is removed from the cache.
Upstreams are removed from the pending LIST structs in two other cases:
- when they are closed (to avoid stalling because of a disconnected
upstream that will never reply to the LIST message): they are removed
from all pending LIST structs
- when they reply with an ERR_UNKNOWNCOMMAND or RPL_TRYAGAIN LIST reply,
which is typically used when a user is not allowed to LIST because they
just joined the server: they are removed from the first pending LIST
struct, as if an RPL_LISTEND message was received
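A sketch of the bookkeeping described above (field names are
assumptions; the real structure may differ):

    // pendingLIST tracks one downstream LIST request. Pending requests
    // are kept in a slice ordered by the time they were received.
    type pendingLIST struct {
        // downstreamID identifies the requesting downstream; LIST
        // replies are only forwarded to this connection.
        downstreamID uint64
        // pendingCommands maps an upstream network ID to the raw LIST
        // command still awaiting an RPL_LISTEND (or equivalent) from
        // that upstream. An entry is removed when the upstream replies
        // or is closed; once the map is empty, a single RPL_LISTEND is
        // sent to the requesting downstream and the struct is dropped.
        pendingCommands map[int64]string
    }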
Split user.register into two functions, one to make sure the user is
authenticated, the other to send our current state. This makes it
possible to get rid of data races by doing the second part in the user
goroutine.
Closes: https://todo.sr.ht/~emersion/soju/22
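One way to picture the split (all names below are placeholders, not
soju's API): authentication runs in the connection's goroutine, and the
state replay is handed off to the user goroutine as an event:

    // event is a piece of work executed on the user goroutine.
    type event func()

    // register authenticates in the caller's goroutine, then queues
    // the state replay onto the user goroutine, so that user state is
    // only touched from a single goroutine and data races are avoided.
    func register(authenticate func() error, userEvents chan<- event, sendState func()) error {
        if err := authenticate(); err != nil {
            return err
        }
        userEvents <- sendState
        return nil
    }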
In a later commit, we'll be able to move part of downstreamConn.register
into the user goroutine to prevent races.
References: https://todo.sr.ht/~emersion/soju/22
This allows message handlers to read upstream/downstream connection
information without causing race conditions.
References: https://todo.sr.ht/~emersion/soju/1