Once a downstream connection has logged in with its bouncer
credentials, allow it to issue further SASL auths, which are
redirected to the upstream network. This allows downstream clients
to provide UIs to transparently log in to upstream networks.
Implements the following recommendation from the spec:
> If the client completes registration (with CAP END, NICK, USER and any other
> necessary messages) while the SASL authentication is still in progress, the
> server SHOULD abort it and send a 906 numeric, then register the client
> without authentication.
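A hypothetical exchange illustrating that case (nick and server name
made up):

    C: CAP REQ :sasl
    S: CAP * ACK :sasl
    C: NICK alice
    C: USER alice 0 * :Alice
    C: AUTHENTICATE PLAIN
    S: AUTHENTICATE +
    C: CAP END
    S: :irc.example.org 906 alice :SASL authentication aborted
    (registration then completes without authentication)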
The MOTD indicates the end of the registration's message burst, and
the server can send arbitrary messages before it.
Update the supported capabilities, the nick and the realname before
the MOTD, so that client logic that runs on MOTD works with
up-to-date info.
This function wraps a parent context and returns a new context that
is cancelled when the connection is closed. As a result, operations
started from downstreamConn.handleMessage are cancelled when the
connection is closed.
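A minimal sketch of the pattern (names and signature are illustrative,
not soju's actual code):

    package example

    import "context"

    // withConnContext wraps parent and returns a context that is cancelled
    // either when parent is cancelled or when the connection's closed
    // channel is closed.
    func withConnContext(parent context.Context, closed <-chan struct{}) (context.Context, context.CancelFunc) {
        ctx, cancel := context.WithCancel(parent)
        go func() {
            defer cancel()
            select {
            case <-ctx.Done(): // parent cancelled, or cancel() called
            case <-closed: // connection closed
            }
        }()
        return ctx, cancel
    }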
As a bonus, the timeout now applies to the whole TLS dial
operation. Before, the timeout only applied to the net dial,
making it possible for a bad server to stall the request by
making the TLS handshake extremely slow.
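A sketch of how a single deadline can cover both steps, using
crypto/tls's dialer (illustrative, not soju's exact code):

    package example

    import (
        "context"
        "crypto/tls"
        "net"
        "time"
    )

    func dialTLS(ctx context.Context, addr string) (net.Conn, error) {
        // The deadline applies to everything below, including the TLS
        // handshake, not just the TCP connection.
        ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
        defer cancel()

        dialer := tls.Dialer{NetDialer: &net.Dialer{}}
        return dialer.DialContext(ctx, "tcp", addr)
    }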
When on an unbound bouncer network downstream, we should return no
targets (there are none, because there are no upstreams at all).
When on a multi-upstream downstream, we should return no targets as we
don't support multi-upstream CHATHISTORY TARGETS.
Before this patch, we returned a misleading error message:
:example.com 403 :Missing network suffix in name
If a downstream of prefix host `foo` sends a message, the other
downstream of prefix host `bar` should receive an echo PRIVMSG with
prefix host `bar`.
This fixes a regression where no prefix host was sent at all.
Add support for MONITOR in single-upstream mode.
Each downstream has its own set of monitored targets. These sets
are merged together to compute the MONITOR commands to send to
upstream.
Each upstream has a set of monitored targets accepted by the server,
along with their status (online/offline). This is used to reply
directly to a downstream adding a target that another downstream has
already added, and to send MONITOR S[TATUS] replies.
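A rough sketch of the bookkeeping described above (type and field names
are illustrative, not soju's actual ones):

    package example

    // Per-downstream targets requested with MONITOR +.
    type downstreamMonitor struct {
        targets map[string]struct{}
    }

    // Per-upstream targets accepted by the server, with their last
    // known status (true = online).
    type upstreamMonitor struct {
        status map[string]bool
    }

    // monitorUnion merges all downstream sets into the set of targets
    // that should be monitored upstream.
    func monitorUnion(downstreams []*downstreamMonitor) map[string]struct{} {
        union := make(map[string]struct{})
        for _, dc := range downstreams {
            for target := range dc.targets {
                union[target] = struct{}{}
            }
        }
        return union
    }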
Co-authored-by: delthas <delthas@dille.cc>
This has the following upsides:
- We can now route WHO replies to the correct client, without
broadcasting them to everybody.
- We are less likely to hit server rate limits when multiple downstreams
are issuing WHO commands at the same time.
The message stores don't need to access the internal network
struct, they just need network metadata such as ID and name.
This can ease moving message stores into a separate package in the
future.
Make Network.Nick optional, defaulting to the user's username. This
will allow adding a global setting to set the nickname in the
future, just like we have for the real name.
References: https://todo.sr.ht/~emersion/soju/110
This adds support for WHOX, without bothering about flags and mask2
because Solanum and Ergo [1] don't support them either.
The motivation is to allow clients to reliably query account names.
It's not possible to use WHOX tokens to route replies to the right
client, because RPL_ENDOFWHO doesn't contain the token.
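A hypothetical exchange (nicks, account and server name made up): the
client asks for the token, nick and account fields, but the closing
RPL_ENDOFWHO carries no token:

    C: WHO emersion %tna,42
    S: :irc.example.org 354 alice 42 emersion emersion-account
    S: :irc.example.org 315 alice emersion :End of WHO list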
[1]: https://github.com/ergochat/ergo/pull/1184
Closes: https://todo.sr.ht/~emersion/soju/135
That's what some widely used IRC servers do for their own services
(e.g. NickServ and ChanServ). This adds an additional level of
trust to make sure BouncerServ isn't typo'ed or impersonated.
This is a mechanical change, which just lifts up the context.TODO()
calls from inside the DB implementations to the callers.
Future work involves properly wiring up the contexts when it makes
sense.
See https://ircv3.net/specs/extensions/capability-negotiation
> Upon receiving either a CAP LS or CAP REQ command during connection
> registration, the server MUST not complete registration until the
> client sends a CAP END command to indicate that capability negotiation
> has ended.
This commit should prevent soju from trying to authenticate the user
prior to having received AUTHENTICATE messages, when the client eagerly
requests capabilities with CAP REQ without seeing the available
capabilities beforehand with CAP LS.
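A hypothetical exchange with such an eager client (names made up);
registration is held until CAP END even though no CAP LS was sent:

    C: CAP REQ :sasl
    C: NICK alice
    C: USER alice 0 * :Alice
    S: CAP * ACK :sasl
    C: AUTHENTICATE PLAIN
    S: AUTHENTICATE +
    C: AUTHENTICATE <base64-encoded credentials>
    S: :irc.example.org 903 alice :SASL authentication successful
    C: CAP END
    (only now does the server complete registration)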
This allows users to set a default realname used if the per-network
realname isn't set.
A new "user update" command is introduced and can be extended to edit
other user properties and other users in the future.
Typically done via:
/notice $<bouncer> <message>
Or, for a connection not bound to a specific network:
/notice $* <message>
The message is broadcast as BouncerServ, because that's the only
user that users can trust to belong to the bouncer. Any other
prefix would conflict with the upstream network.
The first MOTD upon connection is ignored, but subsequent MOTD messages
(requested by the "MOTD" message from the client, typically using a
/motd command) are forwarded.
In multi-upstream mode, we can't relay WHO/WHOIS messages for the
current user, because we can't decide which upstream server the
message should be relayed to.
In single-upstream mode, we do know which upstream server to use,
so we can just blindly relay the message.
This allows users to send a self-WHO/WHOIS to check their cloak and
other information.
Instead of ignoring detached channels when replaying backlog,
process them as usual and relay messages as BouncerServ NOTICEs
if necessary. Advance the delivery receipts as if the channel was
attached.
Closes: https://todo.sr.ht/~emersion/soju/98
This allows for shorter and more future-proof IDs. This also
guarantees the IDs will only use reasonable ASCII characters (no
spaces), removing the need to encode them for PING/PONG tokens.
TL;DR: adds support for casemapping; logs are now saved in
casemapped/canonical/tolower form
(e.g. in the #channel directory instead of #Channel).
== What is casemapping? ==
see <https://modern.ircdocs.horse/#casemapping-parameter>
== Casemapping and multi-upstream ==
Since each upstream does not necessarily use the same casemapping, and
since casemappings cannot coexist [0],
1. soju must also update the database according to the upstreams'
casemapping, otherwise it will end up inconsistent,
2. soju must "normalize" entity names and expose only one casemapping
that is a subset of all supported casemappings (here, ascii).
[0] On some upstreams, "emersion[m]" and "emersion{m}" refer to the same
user (upstreams that advertise rfc1459 for example), while on others
(upstreams that advertise ascii) they don't.
Once the upstream's casemapping is known (defaulting to rfc1459), entity names
in map keys are made into casemapped form, for upstreamConn,
upstreamChannel and network.
downstreamConn advertises "CASEMAPPING=ascii", and always casemaps map
keys with ascii.
Some functions require the caller to casemap their argument (to avoid
needless calls to casemapping functions).
== Message forwarding and casemapping ==
downstream message handling (joins and parts basically):
When relaying entity names from downstreams to upstreams, soju uses the
upstream casemapping, in order to not get in the way of the user. This
does not bring any issues, as long as soju replies with the ascii
casemapping in mind (solves point 1.).
marshalEntity/marshalUserPrefix:
When relaying entity names from upstreams with non-ascii casemappings,
soju *partially* casemaps them: it only changes the case of characters
which are not ASCII letters. ASCII case is thus kept intact, while
special symbols like []{} are the same every time soju sends them to
downstreams (solves point 2.).
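A sketch of such a partial casemapping for the rfc1459 special
characters (illustrative, not soju's exact implementation):

    package example

    import "strings"

    // partialCasemapRFC1459 folds only the rfc1459 special characters to a
    // canonical form, leaving the case of ASCII letters untouched.
    func partialCasemapRFC1459(name string) string {
        return strings.Map(func(r rune) rune {
            switch r {
            case '[':
                return '{'
            case ']':
                return '}'
            case '\\':
                return '|'
            case '~':
                return '^'
            }
            return r
        }, name)
    }

For example, "Emersion[m]" becomes "Emersion{m}": the ASCII case is
preserved, but both spellings of the bracket pair look identical to
downstreams.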
== Casemapping changes ==
Casemapping changes are not fully supported by this patch and will
result in loss of history. This is a limitation of the protocol and
should be solved by the RENAME spec.
... and do not forward INVITEs to downstreams that do not support the
capability.
The downstream capability can be permanent because there is no way for a
client to get the list of people invited to a channel, thus no state can
be corrupted.
... so that the JOIN/history batch takes into account all capabilities.
Without this commit, for example, enabling multi-prefix after the batch
makes the client send NAMES requests for all channels, which generates
needless traffic.
This uses the fields added previously to the Channel struct to implement
the actual detaching/reattaching/relaying logic.
The `FilterDefault` values of the message filters are currently
hardcoded.
The values of the message filters are not currently user-settable.
This introduces a new user event, eventChannelDetach, which stores an
upstreamConn (which might become invalid at the time of processing) and
a channel name, used for auto-detaching. Every time the channel detach
timer is refreshed (by receiving a message, etc.), a new timer is
created on the upstreamChannel, which will dispatch this event after the
duration (discarding the previous timer, if any).
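A rough sketch of this timer dance (names and types are illustrative):

    package example

    import "time"

    type eventChannelDetach struct {
        uc   *upstreamConn // might be invalid when processed
        name string
    }

    type upstreamConn struct {
        events chan<- interface{} // the user's event loop
    }

    type upstreamChannel struct {
        conn        *upstreamConn
        name        string
        detachTimer *time.Timer
    }

    // refreshDetachTimer discards any previous timer and arms a new one
    // that dispatches eventChannelDetach after dur.
    func (ch *upstreamChannel) refreshDetachTimer(dur time.Duration) {
        if ch.detachTimer != nil {
            ch.detachTimer.Stop()
        }
        ch.detachTimer = time.AfterFunc(dur, func() {
            ch.conn.events <- eventChannelDetach{uc: ch.conn, name: ch.name}
        })
    }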
This commit prevents downstream from sending those commands:
- NICK BouncerServ
- NICK BouncerServ/<network>
The latter is necessary because soju would otherwise save the nick change
and, in the event that the downstream connects in single-upstream mode
to <network>, it would end up with the nickname "BouncerServ".
This patch implements basic message delivery receipts via PING and PONG.
When a PRIVMSG or NOTICE message is sent, a PING message with a token is
also sent. The history cursor isn't immediately advanced, instead the
bouncer will wait for a PONG message before doing so.
Self-messages trigger a PING for simplicity's sake. We can't immediately
advance the history cursor in this case, because a prior message might
still have an outstanding PING.
Future work may include optimizations such as removing the need to send
a PING after a self-message, or grouping multiple PING messages
together.
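A hypothetical exchange (token and names made up):

    S: :alice!alice@example.org PRIVMSG #soju :hello
    S: PING delivery-token-42
    C: PONG delivery-token-42

Only once the PONG for the token arrives does the bouncer advance the
history cursor past the PRIVMSG.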
Closes: https://todo.sr.ht/~emersion/soju/11
TAGMSG messages are (in current IRCv3 specs and drafts) only used for
client tags. These are optional by design (since they are not
distributed to all users), so it is preferable to discard them
according to upstream support, instead of waiting for all upstreams to
support the capability before advertising it.
Introduce a messageStore type, which will allow for multiple
implementations (e.g. in the DB or in-memory instead of on-disk).
The message store is per-user so that we don't need to deal with locking
and it's easier to implement per-user limits.
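A rough sketch of what such an interface might look like (method names
are illustrative, not the actual soju API):

    package example

    import "time"

    // Message is a stand-in for a parsed IRC message.
    type Message struct {
        ID   string
        Time time.Time
        Raw  string
    }

    // messageStore is instantiated once per user, so implementations
    // don't need cross-user locking and can enforce per-user limits
    // locally.
    type messageStore interface {
        Append(network, entity string, msg *Message) (id string, err error)
        LoadBefore(network, entity, beforeID string, limit int) ([]*Message, error)
        Close() error
    }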
This simple implementation only advertises extended-join to downstreams
when all upstreams support it.
In the future, it could be modified so that soju buffers incoming
upstream JOINs, sends a WHO, waits for the reply, and sends an extended
join to the downstream; so that soju could advertise that capability
even when some or all upstreams do not support it. This is not the case
in this commit.
This panic happens when sending history to a multi-upstream client.
sendNetworkHistory is called on each network, but dc.network is nil.
Closes: https://todo.sr.ht/~emersion/soju/93
Instead, always read chat history from logs. Unify the implicit chat
history (pushing history to clients) and explicit chat history
(via the CHATHISTORY command).
Instead of keeping track of ring buffer cursors for each client, use
message IDs.
If necessary, the ring buffer could be re-introduced behind a
common MessageStore interface (could be useful when on-disk logs are
disabled).
References: https://todo.sr.ht/~emersion/soju/80
Keep the ring buffer alive even if all clients are connected. Keep the
ID of the latest delivered message even for online clients.
As-is, this is a net downgrade: memory usage increases because ring
buffers aren't freed anymore. However, upcoming commits will replace the
ring buffer with log files. This change makes reading from log files
easier.
soju saved most NickServ messages[0] as credentials because of a missing
`default` clause in the check of the NickServ command.
[0] messages that had at least a command and two other parameters
WebSocket connections allow web-based clients to connect to IRC. This
commit implements the WebSocket sub-protocol as specified by the pending
IRCv3 proposal [1].
WebSocket listeners can now be set up via a "wss" protocol in the
`listen` directive. The new `http-origin` directive allows the CORS
allowed origins to be configured.
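For example (the address and origin below are made up, and the exact
syntax may differ):

    listen wss://:8080
    http-origin https://webchat.example.org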
[1]: https://github.com/ircv3/ircv3-specifications/pull/342