This adds a new field to upstreams, members, which is a casemapped map
of upstream users known to soju. The upstream users known to soju
are: self, any monitored user, and any user with whom we share a
channel.
The information stored for each upstream user corresponds to the info
that can be returned by a WHO/WHOX command.
We build the upstream user information both incrementally, by
capturing information contained in JOIN and AWAY messages, and in
bulk, from the user information contained in the WHO replies we
receive.
This lets us build a user cache that can then be used to return
synthetic WHO responses to later WHO requests by downstreams.
This is useful because some networks (e.g. Libera) heavily throttle
WHO commands. Without this cache, any connecting downstream would send
one WHO command per channel, so possibly more than a dozen WHO
commands, each of which soju forwarded to the upstream.
With this cache, most WHO requests can be answered directly, avoiding
sending WHO commands to the upstream.
In order to cache the "flags" field, we synthesize it from the user
info we get from incremental messages: away status (H/G) and bot
status (B). This could result in incorrect values for proprietary
user fields. The server-operator status (*) is also not supported.
Of note is that it is difficult to obtain a user's "connected server"
field incrementally, so clients that want to maximize their WHO cache
hit ratio can use WHOX to only request fields they need, and in
particular not include the server field flag.
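As a rough sketch of the flags synthesis (hypothetical names and
types, not soju's actual code):

    package main

    import "fmt"

    // memberInfo is a hypothetical subset of the cached user info.
    type memberInfo struct {
        Away bool // learned from AWAY messages
        Bot  bool // learned from bot announcements, where available
    }

    // whoFlags synthesizes the WHO "flags" field: H (here) or G (gone),
    // plus B for bots. The server-operator flag (*) is not synthesized,
    // as noted above.
    func whoFlags(m memberInfo) string {
        flags := "H"
        if m.Away {
            flags = "G"
        }
        if m.Bot {
            flags += "B"
        }
        return flags
    }

    func main() {
        fmt.Println(whoFlags(memberInfo{Away: true, Bot: true})) // GB
    }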
Co-authored-by: delthas <delthas@dille.cc>
Previously, we would clear webpush targets after any MARKREAD.
Consider the following scenario (ignore any typos, this is crafted by
hand):
<<< @time=2020-01-01T00:00:00Z PRIVMSG #foo :hi mark!
<<< @time=2020-01-02T00:00:00Z PRIVMSG #foo :hi again mark!
>>> MARKREAD #foo timestamp=2020-01-01T00:00:00Z
>>> MARKREAD #foo timestamp=2020-01-02T00:00:00Z
The push target was previously cleared on the first MARKREAD, which
means that the second MARKREAD was never broadcast to Firebase, and all
devices would keep the "hi again mark!" notification indefinitely.
This changes the webpush target map so that we store a timestamp of the
last highlight we sent. We only clear the push target when sending a
MARKREAD that is at or after the last message.
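A minimal sketch of the new rule (pushTarget and its field name are
assumptions for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    // pushTarget remembers when we last sent a highlight to a target.
    type pushTarget struct {
        lastHighlight time.Time
    }

    // shouldClear reports whether a MARKREAD at the given timestamp
    // acknowledges the last highlight, i.e. whether the push target
    // can be dropped.
    func shouldClear(t pushTarget, markRead time.Time) bool {
        return !markRead.Before(t.lastHighlight) // markRead >= lastHighlight
    }

    func main() {
        t := pushTarget{lastHighlight: time.Date(2020, 1, 2, 0, 0, 0, 0, time.UTC)}
        first := time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC)
        second := time.Date(2020, 1, 2, 0, 0, 0, 0, time.UTC)
        fmt.Println(shouldClear(t, first))  // false: keep broadcasting MARKREAD
        fmt.Println(shouldClear(t, second)) // true: safe to clear
    }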
The FS message store truncates message times to the second.
This means that a message sent out as 2020-01-01T00:00:00.123Z could be
sent later as part of a CHATHISTORY batch as 2020-01-01T00:00:00.000Z,
which could cause issues in clients.
One such issue is a client sending a MARKREAD for
2020-01-01T00:00:00.000Z while another client still considers the
2020-01-01T00:00:00.123Z message it has as unread.
This fixes the issue by truncating all message times to the second when
using the FS message store.
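For illustration, second-precision truncation in Go looks like this
(not soju's exact code path):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // The sub-second value soju holds in memory...
        t := time.Date(2020, 1, 1, 0, 0, 0, 123000000, time.UTC)
        // ...and the value the FS message store can represent.
        fmt.Println(t.Truncate(time.Second).Format(time.RFC3339)) // 2020-01-01T00:00:00Z
    }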
We already have logic to regain our desired nick when the upstream
server supports MONITOR. However some networks (e.g. OFTC, Rizon)
don't support MONITOR. Also try to regain our desired nick in that
case, by periodically sending NICK commands.
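A sketch of the fallback loop (names and the retry interval are
assumptions, not soju's internals):

    package main

    import (
        "fmt"
        "time"
    )

    // regainNickLoop periodically retries the desired nick until we
    // have it, for upstreams without MONITOR support.
    func regainNickLoop(interval time.Duration, desired string, current func() string, sendNICK func(string), stop <-chan struct{}) {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            select {
            case <-ticker.C:
                if current() != desired {
                    sendNICK(desired)
                }
            case <-stop:
                return
            }
        }
    }

    func main() {
        stop := make(chan struct{})
        sent := make(chan string, 1)
        go regainNickLoop(10*time.Millisecond, "mynick",
            func() string { return "mynick_" }, // still on the fallback nick
            func(nick string) {
                select {
                case sent <- nick: // record the first attempt, non-blocking
                default:
                }
            },
            stop)
        fmt.Println(">>> NICK", <-sent)
        close(stop)
    }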
Closes: https://todo.sr.ht/~emersion/soju/197
The soju username is immutable. Add a separate nickname setting so
that users can change their nickname for all networks.
References: https://todo.sr.ht/~emersion/soju/110
Some WebPushSubscription entries aren't tied to a network, in
which case the "network" column is NULL. But then all users share
the same row. Oops.
Fortunately network-less subscriptions aren't used for anything
yet; they're just stored. So the impact should be minimal.
This fixes a serious bug where we stop executing forEachDownstream on
the first downstream that does not match the network. Instead we want to
simply continue; it's a basic filter.
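In sketch form (illustrative types; soju's real forEachDownstream
differs in detail):

    package main

    import "fmt"

    type downstreamConn struct{ network string }

    func forEachDownstream(conns []*downstreamConn, network string, f func(*downstreamConn)) {
        for _, dc := range conns {
            if dc.network != network {
                continue // the fix: skip non-matching conns, don't stop
            }
            f(dc)
        }
    }

    func main() {
        conns := []*downstreamConn{{"oftc"}, {"libera"}, {"libera"}}
        forEachDownstream(conns, "libera", func(dc *downstreamConn) {
            fmt.Println("sending to a", dc.network, "downstream")
        })
        // With the bug, iteration ended at the first "oftc" conn and
        // neither "libera" downstream was visited.
    }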
Currently, if we fail connecting to a new network during welcome, we
send no error message to the client, and the connection remains open in
an undefined state.
Given the input:
NICK nick
USER user/invalid.xyz s e r
PASS pass
soju will fail to connect, add a message to its own logs, but will
return no message to the downstream.
This fixes the issue by forwarding the error message if it is an IRC
error message (which it is for connecting to new networks).
We should probably also close the connection after the message is
written, because it leaves the connection in an undefined state. This is
TODO for now because we'd have to wait for the error message to be
written out first, which is non-trivial.
When fetching messages via draft/chathistory from a conversation
with another user, soju would send the following:
:sender PRIVMSG sender :hey
instead of
:sender PRIVMSG recipient :hey
because the file-system message store format doesn't contain the
original PRIVMSG target.
Fix this by doing some guesswork.
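One plausible reading of that guesswork, as a sketch (the helper and
its rule are assumptions, not necessarily soju's exact logic):

    package main

    import "fmt"

    // The FS log for a one-to-one conversation is keyed by the other
    // user ("entity"). If the stored sender is the entity itself, the
    // original target must have been us; otherwise it was the entity.
    func guessMessageTarget(entity, sender, ourNick string) string {
        if sender == entity {
            return ourNick
        }
        return entity
    }

    func main() {
        // Replaying messages from the "alice" log:
        fmt.Println(guessMessageTarget("alice", "alice", "me")) // me
        fmt.Println(guessMessageTarget("alice", "me", "me"))    // alice
    }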
READ lets downstream clients share with each other information about
which messages have been read.
Each target/entity has an optional corresponding read receipt, which is
stored as a timestamp.
- When a downstream sends:
      READ #chan timestamp=2020-01-01T01:23:45.000Z
  the read receipt for that target is set to that date.
- soju sends READ to downstreams:
  - on JOIN, if the client uses the soju.im/read capability
  - when the read receipt timestamp is set by any downstream
The read receipt date is clamped by the previous receipt date and the
current time.
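A minimal sketch of the clamping (hypothetical helper name):

    package main

    import (
        "fmt"
        "time"
    )

    // clampReadReceipt keeps the receipt from moving backwards past
    // the previous receipt or forwards into the future.
    func clampReadReceipt(requested, previous, now time.Time) time.Time {
        if requested.Before(previous) {
            return previous
        }
        if requested.After(now) {
            return now
        }
        return requested
    }

    func main() {
        now := time.Date(2020, 1, 3, 0, 0, 0, 0, time.UTC)
        prev := time.Date(2020, 1, 2, 0, 0, 0, 0, time.UTC)
        tooOld := time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC)
        fmt.Println(clampReadReceipt(tooOld, prev, now)) // clamped to prev
    }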
Right now there is no consistent ordering in the network list:
no ORDER BY in the DB, and network updates move entries to the end.
Let's always sort by network ID so that users don't see the entries
move around.
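In Go terms, the ordering amounts to something like this sketch
(illustrative struct, not soju's actual type):

    package main

    import (
        "fmt"
        "sort"
    )

    type network struct {
        ID   int64
        Name string
    }

    func main() {
        networks := []network{{ID: 3, Name: "oftc"}, {ID: 1, Name: "libera"}}
        // Stable ordering, independent of creation or update order.
        sort.Slice(networks, func(i, j int) bool {
            return networks[i].ID < networks[j].ID
        })
        fmt.Println(networks) // [{1 libera} {3 oftc}]
    }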
I've contemplated sorting by Network.GetName() instead, but:
- Clients have no way to figure out dynamic order changes, e.g.
when renaming a network.
- Some clients might use ISUPPORT NETWORK when a user hasn't
explicitly named a network, but soju won't use that for ordering,
leading to non-alphabetic ordering in the client.
Let's leave it to clients to sort the networks by display name if
they want to.
If a client queues a high number of commands and then disconnects,
remove all of the pending commands. This avoids unnecessarily
sending commands whose results won't be used.
The first reconnection attempt waits for 1min, the second for 2min,
and so on up to 10min. There's a 1min jitter so that multiple failed
connections don't try to reconnect at the exact same time.
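As a sketch, the delay computation could look like this (the exact
jitter distribution is an assumption):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // backoff returns the delay before reconnection attempt n
    // (1-based): n minutes, capped at 10, plus up to 1 minute of
    // random jitter.
    func backoff(n int) time.Duration {
        d := time.Duration(n) * time.Minute
        if d > 10*time.Minute {
            d = 10 * time.Minute
        }
        return d + time.Duration(rand.Int63n(int64(time.Minute)))
    }

    func main() {
        for _, n := range []int{1, 2, 11} {
            fmt.Printf("attempt %d: wait %s\n", n, backoff(n).Round(time.Second))
        }
    }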
Closes: https://todo.sr.ht/~emersion/soju/161
Once the downstream connection has logged in with their bouncer
credentials, allow them to issue more SASL auths which will be
redirected to the upstream network. This allows downstream clients
to provide UIs to transparently log in to upstream networks.
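For example (credentials and server name illustrative), a client that
has already registered with its bouncer credentials could run a second
SASL PLAIN exchange that soju relays to the upstream network:

>>> AUTHENTICATE PLAIN
<<< AUTHENTICATE +
>>> AUTHENTICATE <base64 of authzid\0authcid\0upstream password>
<<< :irc.example.org 903 nick :SASL authentication successful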