Currently, if we fail to connect to a new network during welcome, we
send no error message to the client, and the connection remains open in
an undefined state.
Given the input:
NICK nick
USER user/invalid.xyz s e r
PASS pass
soju will fail to connect, add a message to its own logs, but will
return no message to the downstream.
This fixes the issue by forwarding the error message if it is an IRC
error message (which it is for connecting to new networks).
We should probably also close the connection after the message is
written, because it leaves the connection in an undefined state. This is
TODO for now because we'd have to wait for the error message to be
written out first, which is non-trivial.
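A sketch of the forwarding, with hypothetical names (ircError as a local
error type wrapping an *irc.Message, dc as the downstream connection):

    if err := dc.user.createNetwork(record); err != nil {
        // Only IRC errors carry a message we can safely relay downstream.
        if ircErr, ok := err.(ircError); ok {
            dc.SendMessage(ircErr.Message)
            // TODO: close the connection once the message is flushed
        }
        return err
    }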
When fetching messages via draft/chathistory from a conversation
with another user, soju would send the following:
:sender PRIVMSG sender :hey
instead of
:sender PRIVMSG recipient :hey
because the file-system message store format doesn't contain the
original PRIVMSG target.
Fix this by doing some guesswork.
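The guesswork might look roughly like this (names illustrative, not the
exact code): when replaying a message from a conversation with another
user, infer the target from the sender prefix.

    // If the stored message came from the other user, it must have been
    // addressed to us; otherwise we sent it, and the entity is the target.
    target := entity
    if msg.Prefix != nil && strings.EqualFold(msg.Prefix.Name, entity) {
        target = uc.nick
    }
    msg.Params[0] = target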
READ lets downstream clients share information between each other about
what messages have been read by other downstreams.
Each target/entity has an optional corresponding read receipt, which is
stored as a timestamp.
- When a downstream sends:
READ #chan timestamp=2020-01-01T01:23:45.000Z
the read receipt for that target is set to that date
- soju sends READ to downstreams:
- on JOIN, if the client uses the soju.im/read capability
- when the read receipt timestamp is set by any downstream
The read receipt date is clamped by the previous receipt date and the
current time.
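As a pure-function sketch (the name clampReadReceipt is hypothetical),
the clamping rule is:

    // A receipt may neither move backwards nor point into the future.
    func clampReadReceipt(t, prev, now time.Time) time.Time {
        if t.Before(prev) {
            return prev
        }
        if t.After(now) {
            return now
        }
        return t
    }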
Right now there is no consistent ordering in the network list:
no ORDER BY in the DB, and network updates move entries to the end.
Let's always sort by network ID so that users don't see the entries
move around.
I've contemplated sorting by Network.GetName() instead, but:
- Clients have no way to figure out dynamic order changes, e.g.
when renaming a network.
- Some clients might use ISUPPORT NETWORK when a user hasn't
explicitly named a network, but soju won't use that for ordering,
leading to non-alphabetic ordering in the client.
Let's leave it to clients to sort the networks by display name if
they want to.
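For illustration, the stable ordering amounts to something like this
sketch (not the exact query or code):

    // Sort the in-memory list by network ID so entries never move
    // around after an update.
    sort.Slice(networks, func(i, j int) bool {
        return networks[i].ID < networks[j].ID
    })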
If a client queues a high number of commands and then disconnects,
remove all of the pending commands. This avoids unnecessarily
sending commands whose results won't be used.
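Roughly (field names hypothetical), the cleanup filters the queued
commands by the disconnected downstream's ID:

    for name, cmds := range uc.pendingCmds {
        filtered := cmds[:0]
        for _, cmd := range cmds {
            if cmd.downstreamID != dc.id {
                filtered = append(filtered, cmd)
            }
        }
        uc.pendingCmds[name] = filtered
    }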
The first reconnection attempt waits for 1min, the second 2min,
and so on up to 10min. There's a 1min jitter so that multiple failed
connections don't try to reconnect at the exact same time.
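The schedule boils down to something like this sketch (retryDelay is an
illustrative name):

    // Attempt n waits min(n, 10) minutes, plus up to 1 minute of jitter.
    func retryDelay(attempt int) time.Duration {
        d := time.Duration(attempt) * time.Minute
        if d > 10*time.Minute {
            d = 10 * time.Minute
        }
        return d + time.Duration(rand.Int63n(int64(time.Minute)))
    }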
Closes: https://todo.sr.ht/~emersion/soju/161
Once the downstream connection has logged in with their bouncer
credentials, allow them to issue more SASL auths which will be
redirected to the upstream network. This allows downstream clients
to provide UIs for transparently logging in to upstream networks.
Add support for MONITOR in single-upstream mode.
Each downstream has its own set of monitored targets. These sets
are merged together to compute the MONITOR commands to send to
upstream.
Each upstream has a set of monitored targets accepted by the server,
along with their status (online/offline). This is used to reply
directly when a downstream adds a target that another downstream has
already added, and to send MONITOR S[TATUS] replies.
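A rough sketch of the merge (fields and helpers hypothetical): a target
stays monitored upstream as long as at least one downstream wants it.

    wanted := make(map[string]bool)
    for _, dc := range uc.user.downstreamConns {
        for target := range dc.monitored {
            wanted[casemap(target)] = true
        }
    }
    for target := range wanted {
        if _, ok := uc.monitored[target]; !ok {
            uc.SendMessage(&irc.Message{
                Command: "MONITOR",
                Params:  []string{"+", target},
            })
        }
    }
    // ...and symmetrically, MONITOR - for targets nobody wants anymore.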
Co-authored-by: delthas <delthas@dille.cc>
This has the following upsides:
- We can now route WHO replies to the correct client, without
broadcasting them to everybody.
- We are less likely to hit server rate limits when multiple downstreams
are issuing WHO commands at the same time.
The message stores don't need to access the internal network
struct, they just need network metadata such as ID and name.
This can ease moving message stores into a separate package in the
future.
This is a mechanical change, which just lifts up the context.TODO()
calls from inside the DB implementations to the callers.
Future work involves properly wiring up the contexts when it makes
sense.
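Illustrative before/after, with a simplified GetUser (not the exact
soju code):

    // Before, the placeholder lived inside the DB layer:
    //
    //     func (db *DB) GetUser(username string) (*User, error) {
    //         ctx := context.TODO()
    //         row := db.db.QueryRowContext(ctx, ...)
    //     }
    //
    // After, the caller supplies the context:
    func (db *DB) GetUser(ctx context.Context, username string) (*User, error) {
        row := db.db.QueryRowContext(ctx,
            "SELECT password FROM User WHERE username = ?", username)
        user := &User{Username: username, Created: true}
        if err := row.Scan(&user.Password); err != nil {
            return nil, err
        }
        return user, nil
    }

    // Call sites hold the TODO until real contexts get wired up:
    u, err := db.GetUser(context.TODO(), "jdoe")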
This allows users to set a default realname used if the per-network
realname isn't set.
A new "user update" command is introduced and can be extended to edit
other user properties and other users in the future.
Typically done via:
/notice $<bouncer> <message>
Or, for a connection not bound to a specific network:
/notice $* <message>
The message is broadcast as BouncerServ, because that's the only
user that clients can trust to belong to the bouncer. Any other
prefix would conflict with the upstream network.
Instead of ignoring detached channels when replaying backlog,
process them as usual and relay messages as BouncerServ NOTICEs
if necessary. Advance the delivery receipts as if the channel was
attached.
Closes: https://todo.sr.ht/~emersion/soju/98
TL;DR: support for casemapping; logs are now saved in
casemapped/canonical/tolower form
(e.g. in the #channel directory instead of #Channel).
== What is casemapping? ==
see <https://modern.ircdocs.horse/#casemapping-parameter>
== Casemapping and multi-upstream ==
Since upstreams do not necessarily all use the same casemapping, and
since casemappings cannot coexist [0],
1. soju must also update the database according to the upstreams'
casemapping, otherwise it will end up inconsistent,
2. soju must "normalize" entity names and expose only one casemapping
that is a subset of all supported casemappings (here, ascii).
[0] On some upstreams, "emersion[m]" and "emersion{m}" refer to the same
user (upstreams that advertise rfc1459 for example), while on others
(upstreams that advertise ascii) they don't.
Once the upstream's casemapping is known (defaulting to rfc1459),
entity names
in map keys are made into casemapped form, for upstreamConn,
upstreamChannel and network.
downstreamConn advertises "CASEMAPPING=ascii", and always casemaps map
keys with ascii.
Some functions require the caller to casemap their argument (to avoid
needless calls to casemapping functions).
== Message forwarding and casemapping ==
downstream message handling (joins and parts basically):
When relaying entity names from downstreams to upstreams, soju uses the
upstream casemapping, in order to not get in the way of the user. This
does not bring any issue, as long as soju replies with the ascii
casemapping in mind (solves point 1.).
marshalEntity/marshalUserPrefix:
When relaying entity names from upstreams with non-ascii casemappings,
soju *partially* casemaps them: it only changes the case of characters
which are not ASCII letters. ASCII case is thus kept intact, while
special symbols like []{} are the same every time soju sends them to
downstreams (solves point 2.).
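For rfc1459 upstreams, the partial casemapping amounts to something
like this (function name illustrative):

    // Lower only the rfc1459 special symbols, keeping ASCII letter case
    // intact, so that e.g. "Nick[m]" and "Nick{m}" always reach
    // downstreams in the same form.
    func partialCasemapRFC1459(name string) string {
        var sb strings.Builder
        for _, r := range name {
            switch r {
            case '[':
                r = '{'
            case ']':
                r = '}'
            case '\\':
                r = '|'
            case '^':
                r = '~'
            }
            sb.WriteRune(r)
        }
        return sb.String()
    }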
== Casemapping changes ==
Casemapping changes are not fully supported by this patch and will
result in loss of history. This is a limitation of the protocol and
should be solved by the RENAME spec.
... so that the JOIN/history batch takes into account all capabilities.
Without this commit for example, enabling multi-prefix after the batch
makes the client send NAMES requests for all channels, which generate
needless traffic.
This uses the fields added previously to the Channel struct to implement
the actual detaching/reattaching/relaying logic.
The `FilterDefault` values of the message filters are currently
hardcoded.
The values of the message filters are not currently user-settable.
This introduces a new user event, eventChannelDetach, which stores an
upstreamConn (which might become invalid at the time of processing), and
a channel name, used for auto-detaching. Every time the channel detach
timer is refreshed (by receiving a message, etc.), a new timer is
created on the upstreamChannel, which will dispatch this event after the
duration (discarding the previous timer, if any).
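A sketch of the refresh (types and field names hypothetical):

    func (ch *upstreamChannel) refreshDetachTimer(uc *upstreamConn) {
        if ch.detachTimer != nil {
            ch.detachTimer.Stop() // discard the previous timer, if any
        }
        name := ch.Name
        ch.detachTimer = time.AfterFunc(ch.detachAfter, func() {
            uc.network.user.events <- eventChannelDetach{uc: uc, name: name}
        })
    }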
Introduce a messageStore type, which will allow for multiple
implementations (e.g. in the DB or in-memory instead of on-disk).
The message store is per-user so that we don't need to deal with locking
and it's easier to implement per-user limits.
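The interface might look roughly like this (a sketch, not the actual
API):

    type messageStore interface {
        Append(net *network, entity string, msg *irc.Message) (id string, err error)
        LoadBeforeTime(net *network, entity string, t time.Time, limit int) ([]*irc.Message, error)
        Close() error
    }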
Instead, always read chat history from logs. Unify the implicit chat
history (pushing history to clients) and explicit chat history
(via the CHATHISTORY command).
Instead of keeping track of ring buffer cursors for each client, use
message IDs.
If necessary, the ring buffer could be re-introduced behind a
common MessageStore interface (could be useful when on-disk logs are
disabled).
References: https://todo.sr.ht/~emersion/soju/80
Keep the ring buffer alive even if all clients are connected. Keep the
ID of the latest delivered message even for online clients.
As-is, this is a net downgrade: memory usage increases because ring
buffers aren't freed anymore. However upcoming commits will replace the
ring buffer with log files. This change makes reading from log files
easier.
For Network and Channel, the database only needed to define one Store
operation to create/update a record. However since User is missing an ID
we couldn't have a single StoreUser function like other types. We had
CreateUser and UpdatePassword. As new User fields get added (e.g. the
upcoming Admin flag) this isn't sustainable.
We could have CreateUser and UpdateUser, but this wouldn't be consistent
with other types. Instead, introduce User.Created which indicates
whether the record is already stored in the DB. This can be used in a
new StoreUser function to decide whether we need to UPDATE or INSERT
without relying on SQL constraints and INSERT OR UPDATE.
The ListUsers and GetUser functions set User.Created to true.
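The gist of StoreUser (SQL and columns simplified):

    func (db *DB) StoreUser(user *User) error {
        var err error
        if user.Created {
            _, err = db.db.Exec("UPDATE User SET password = ? WHERE username = ?",
                user.Password, user.Username)
        } else {
            _, err = db.db.Exec("INSERT INTO User (username, password) VALUES (?, ?)",
                user.Username, user.Password)
            if err == nil {
                user.Created = true
            }
        }
        return err
    }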
The user.updateNetwork function is a bit involved because we need to
make sure that the upstream connection is closed before re-connecting
(would otherwise cause "Nick already used" errors) and that the
downstream connections' state is kept in sync.
References: https://todo.sr.ht/~emersion/soju/17
Previously, the downstream nick was never changed, even when the
downstream sent a NICK message or was in single-server mode with a
different nick.
This adds support for updating the downstream nick in the following
cases:
- when a downstream sends NICK
- additionally, in single-server mode:
- when a downstream connects and its single network is connected
- when an upstream connects
- when an upstream sends NICK
Channels can now be detached by leaving them with the reason "detach",
and re-attached by joining them again. Upon detaching the channel is
no longer forwarded to downstream connections. Upon re-attaching the
history buffer is sent.
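For instance, from a typical client:

    /part #soju detach        (soju stops forwarding the channel)
    /join #soju               (re-attach; the history buffer is replayed)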
This fixes a serious bug added in 276ce12e, where in newNetwork all
channels point to the same channel, which causes soju to only join a
single channel when connecting to an upstream network.
This also adds the same kind of reassignment of a for loop variable in
user.run(), even though that function currently works correctly, as a
sanity improvement in case this function is changed in the future.
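The underlying pitfall is the classic Go loop-variable capture
(pre-Go 1.22 semantics), illustrated here with simplified code:

    // All iterations reuse a single loop variable, so &ch is the same
    // pointer every time unless the variable is reassigned.
    for _, ch := range channels {
        ch := ch // create a fresh copy per iteration
        network.channels[ch.Name] = &ch
    }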
This makes use of cap-notify to dynamically advertise support for
away-notify. away-notify is advertised to downstream connections if all
upstreams support it.
Store the list of configured channels in the network data structure.
This removes the need for a database lookup and will be useful for
detached channels.
Instead of having one ring buffer per network, each network has one ring
buffer per entity (channel or nick). This allows history to be more
fair: if there's a lot of activity in a channel, it won't prune activity
in other channels.
We now track history sequence numbers per client and per network in
networkHistory. The overall list of offline clients is still tracked in
network.offlineClients.
When all clients have received history, the ring buffer can be released.
In the future, we should get rid of too-old offline clients to avoid
having to maintain history for them forever. We should also add a
per-user limit on the number of ring buffers.