This adds a new field to upstreams, members, which is a casemapped map
of upstream users known to soju. The upstream users known to soju
are: self, any monitored user, and any user with whom we share a
channel.
The information stored for each upstream user corresponds to the info
that can be returned by a WHO/WHOX command.
We build the upstream user information both incrementally, capturing
information contained in JOIN and AWAY messages, and in bulk, from the
user information contained in the WHO replies we receive.
This lets us build a user cache that can then be used to return
synthetic WHO responses to later WHO requests by downstreams.
This is useful because some networks (e.g. Libera) heavily throttle WHO
commands. Without this cache, any downstream connecting would send one
WHO command per channel, possibly more than a dozen in total, which soju
then forwarded to the upstream. With this cache, most WHO commands from
downstreams can be answered directly, without sending anything to the
upstream.
In order to cache the "flags" field, we synthetize the field from user
info we get from incremental messages: away status (H/G) and bot status
(B). This could result in incorrect values for proprietary user fields.
Support for the server-operator status (*) is also not supported.
Of note is that it is difficult to obtain a user's "connected server"
field incrementally, so clients that want to maximize their WHO cache
hit ratio can use WHOX to request only the fields they need, and in
particular not include the server field flag.
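For example (channel and field mask purely illustrative), a client could
ask for everything it needs except the server name with a WHOX request
such as:

WHO #foo %cuhnfar

Here c, u, h, n, f, a and r request the channel, username, hostname,
nick, flags, account and realname, but not the server (s).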
Co-authored-by: delthas <delthas@dille.cc>
Previously, we would clear webpush targets after any MARKREAD.
Consider the following scenario (ignore any typos, this is crafted by
hand):
<<< @time=2020-01-01T00:00:00Z PRIVMSG #foo :hi mark!
<<< @time=2020-01-02T00:00:00Z PRIVMSG #foo :hi again mark!
>>> MARKREAD #foo timestamp=2020-01-01T00:00:00Z
>>> MARKREAD #foo timestamp=2020-01-02T00:00:00Z
The push target was previously cleared on the first MARKREAD, which
means that the second MARKREAD was never broadcast to Firebase, and all
devices would keep the "hi again mark!" notification indefinitely.
This changes the webpush target map so that we store the timestamp of
the last highlight we sent. We only clear the push target when sending a
MARKREAD that is at or after that timestamp.
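A minimal sketch of the new condition (not the actual soju code, names
are illustrative), assuming both values are time.Time:

// Clear the push target only once the MARKREAD timestamp is at or
// after the last highlight we pushed for this target.
func shouldClearPushTarget(markRead, lastHighlight time.Time) bool {
	return !markRead.Before(lastHighlight)
}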
The FS message store truncates message times to the second.
This means that a message sent out as 2020-01-01T00:00:00.123Z could be
sent later as part of a CHATHISTORY batch as 2020-01-01T00:00:00.000Z,
which could cause issues in clients.
One such issue is a client sending a MARKREAD for
2020-01-01T00:00:00.000Z while another client still considers the
2020-01-01T00:00:00.123Z message it has as unread.
This fixes the issue by truncating all message times to the second when
using the FS message store.
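A minimal sketch of the fix, assuming the message time is available as a
time.Time (the variable name is illustrative):

// Truncate to second precision so the time sent live and the time
// replayed from the FS store always match.
msgTime = msgTime.Truncate(time.Second)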
Some clients will queue up multiple AUTHENTICATE commands without
waiting for a reply, to avoid some round-trips. However, that means the
traffic looks like this:
AUTHENTICATE <mechanism>
AUTHENTICATE <base64 blob containing credentials>
soju will fail the first command, and will behave as if no SASL
authentication was in progress when interpreting the second one.
This means we'll echo back the security-sensitive base64 blob to
the client in the error message, which is definitely not great.
Stop doing that.
Previously, uc.network.Network.Nick wasn't successfully updated on
downstream NICK. This would cause soju to immediately switch back to the
old nick when the upstream supported MONITOR, so long as the network had
a nick configured as of initialization.
In addition, stop monitoring our desired nick once we've successfully
switched to it once, in order to not immediately undo server-induced
nick changes.
This makes it so any errors are only relayed to this downstream
connection.
The upstream handler for SETNAME handles the broadcasting to all
downstream connections already.
The soju username is immutable. Add a separate nickname setting so
that users can change their nickname for all networks.
References: https://todo.sr.ht/~emersion/soju/110
Some WebPushSubscription entries aren't tied to a network, in
which case the "network" column is NULL. But then all users share
the same row. Oops.
Fortunately network-less subscriptions aren't used for anything
yet, they're just stored. So the impact should be minimal.
On upstreams without message-tags support, we do not advertise
message-tags anymore. Still, we want to send the batch tag when the
client explicitly requested it.
This fixes a critical issue where we drop the batch tag on chathistory
messages for upstreams that do not support message-tags.
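For illustration (batch reference, nick and time made up), a chathistory
reply should keep the batch tag even when the upstream lacks
message-tags:

<<< BATCH +abc chathistory #foo
<<< @batch=abc;time=2020-01-01T00:00:00Z :alice PRIVMSG #foo :hello
<<< BATCH -abc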
Previously, we would always advertise message-tags. This made
downstreams believe they could send TAGMSG to the upstream, even though
the upstream did not support it.
This adds support for upstream echo-message. This capability is
enabled when the upstream supports labeled-response.
When it is enabled, we don't echo downstream messages in the downstream
handler, but rather wait for the upstream to echo them before producing
them to downstreams.
When it is disabled, we keep the same behaviour as before: produce the
message to all downstreams as soon as it is received from the
downstream.
In other words, the main functional difference is that when the upstream
supports labeled-response, the client will now receive an echo for its
messages when the server acknowledges them, rather than when soju acks
them.
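For illustration (label, nick and hostmask made up), the flow with
upstream echo-message and labeled-response roughly looks like:

downstream to soju:   PRIVMSG #foo :hi
soju to upstream:     @label=1 PRIVMSG #foo :hi
upstream to soju:     @label=1 :us!user@host PRIVMSG #foo :hi
soju to downstreams:  :us!user@host PRIVMSG #foo :hi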
Additionally, uc.produce was refactored to take an ID rather than a
downstream.
When a client sends BOUNCER CHANGENETWORK with no port value (or an
empty one), it means it wants to reset the port to its default value.
Previously, we treated an empty port as an actual, valid, empty port
value, which would then be used to connect to the server (dialing
'example.com:', i.e. 'example.com:0'), which failed.
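For example (network ID made up), a client resetting the port back to
its default would send:

BOUNCER CHANGENETWORK 42 port=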
This avoids having more than one request in flight at a time (which
helps avoid hitting rate limits) and routes replies back to the correct
downstream connection (even if labeled-response isn't supported).
Closes: https://todo.sr.ht/~emersion/soju/193
Just force-set the nickname and completely disregard what the client
sets during connection registration. Clients must discover their
effective nickname via RPL_WELCOME.
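For illustration (nicknames and server name made up):

>>> NICK whatever
<<< :soju.example.org 001 actualnick :Welcome to soju, actualnick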
Most users will connect to their server with `<username>` as their
username in order to configure their upstreams.
Multi-upstream can be unintuitive to them and should not be enabled on
that first connection that is usually used for upstream configuration.
Multi-upstream is instead a power-user feature that should be explicitly
enabled with a specific network suffix.
We reserve the network suffix `*` and use it as a special case to
request multi-upstream mode.
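For example (assuming the suffix is passed the same way as a network
name), a power user would connect with a username such as:

<username>/*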
Previously, when we sent READ for an upstream which was disconnected,
we would fail with an error. This is because we called unmarshalEntity,
which checked that the upstream was in the connected status.
But we don't need to be connected to update the READ timestamp: this is
a purely offline operation (with respect to the upstream).
This simply switches the call from unmarshalEntity to
unmarshalEntityNetwork to fix the issue.
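For illustration (target and timestamp made up), a command like the
following now succeeds even while the network is disconnected:

>>> READ #foo timestamp=2020-01-01T00:00:00.000Z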