Cleanup leftover line

Extend automated tests to cover new ::xo::dc multirow api

Provide an ::xo::dc api to generate multirows

Notable differences from the classical db_multirow:

- a multirow will always be appended to when it already exists. The constraint that the two multirows must have the same columns remains.

- no "if_no_rows_code_block"

- no unclobber

- no subst, do it yourself :-)

- no cache stuff

- support for prepared statements

The remaining behavior has been kept the same, e.g. variables will always be reset to the empty string, even if they existed outside of the code block. Compatibility has been checked against known idiosyncrasies.
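
A minimal usage sketch (assuming the new call mirrors the classic db_multirow argument order: multirow name, query name, SQL, optional code block; table and column names are invented for illustration):

::xo::dc multirow persons get_persons {
    select person_id, first_names, last_name from persons
} {
    # the code block sees the row variables, as with db_multirow
    set last_name [string toupper $last_name]
}
# a second call with the same multirow name appends to the existing
# multirow, provided both have the same columns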

show the question_count in the title only while filling in the exam

use message key

    • -1
    • +1
    /openacs-4/packages/xowf/lib/inclass-exam.wf
ensure variable results is defined

Fix prepared statement syntax

Remove test of undocumented format to specify prepared statement

show composite subquestions in question_overview_block

    • -32
    • +47
    /openacs-4/packages/xowf/tcl/test-item-procs.tcl
Provide, as for other interfaces, a Postgres implementation of database foreach that will support prepared statements and won't just wrap the db_* api

Make sure the original SQL stays unchanged, as it is used e.g. in the nsv storing the statement and in log messages

Basic test of xo::dc foreach when using prepared statements

This api currently supports the flag, but will ignore it
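
A minimal sketch of the intended call pattern (the placement and spelling of the -prepare flag are assumed from the other ::xo::dc calls, and the query is invented):

set parent_id 42
::xo::dc foreach -prepare integer get_children {
    select item_id, name from cr_items where parent_id = :parent_id
} {
    ns_log notice "child $item_id is named $name"
}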

Strip the possible validation constraint after the first colon character when building the cache key for a parameter, so that the value is stored correctly regardless of the format used to query the parameter

Fixes xowiki.xowiki_test_cases automated test

    • -1
    • +7
    /openacs-4/packages/xowiki/tcl/package-procs.tcl
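
A minimal sketch of the colon-stripping idea behind the cache-key fix above (parameter name and helper variables are illustrative):

# a parameter may be queried as "name" or as "name:validation_constraint";
# the cache key should always be built from the bare name
set parameter_spec "MyParameter:integer,notnull"
set cache_key [lindex [split $parameter_spec ":"] 0]
# -> "MyParameter", regardless of the format used in the query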
Improve the handling of strings containing colons in prepared SQL statements:

we first normalize all string literals to a safe placeholder, substitute the variables, then put the strings back in place.
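
A rough sketch of the normalize/substitute/restore idea (the placeholder characters and helper logic are illustrative, not the actual implementation):

set sql {select x from t where tag = 'a:b' and id = :the_id}

# 1) replace every quoted string literal with a safe placeholder
set literals [regexp -all -inline {'[^']*'} $sql]
set i 0
foreach lit $literals {
    set sql [string map [list $lit "\x01$i\x01"] $sql]
    incr i
}

# 2) now only real bind variables are left to match
set vars [regexp -all -inline {:[a-zA-Z_][a-zA-Z0-9_]*} $sql]   ;# -> :the_id

# 3) put the string literals back in place
set i 0
foreach lit $literals {
    set sql [string map [list "\x01$i\x01" $lit] $sql]
    incr i
}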

Extend test for prepared statements containing strings with colon characters, exposing that the latest commit won't address all cases

Make test more consistent

Improve the regexp detecting variables in a prepared statement, so that a prepared variable must not be preceded by a colon (as before), nor by any character allowed in a variable name
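
To illustrate the intent (the actual regexp in the code may differ): a colon or a name character directly in front of the colon disqualifies the match, so e.g. Postgres casts like ::int are not treated as bind variables.

set re {(?:^|[^:a-zA-Z0-9_]):([a-zA-Z_][a-zA-Z0-9_]*)}
set matches [regexp -all -inline $re {select y::int, :real_var from t}]
# only :real_var is reported; the ::int cast is left alone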

Introduce a test exposing that when a statement is prepared on SQL containing colons, this would fail because they would be interpreted as variables

Reduce logging as we do downstream

Refactor the query in the folder-chunk page so that, on Postgres, one can enforce permissions in bulk rather than for each file
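
Roughly, the idea is to let the database filter by permission instead of checking each file individually; a simplified sketch, assuming the acs_permission.permission_p Postgres function of recent OpenACS versions (older installations use acs_permission__permission_p) and invented variable names:

set folder_id 123                  ;# illustrative
set user_id [ad_conn user_id]
set readable_files {}
::xo::dc foreach get_readable_files {
    select i.item_id, i.name
      from cr_items i
     where i.parent_id = :folder_id
       and acs_permission.permission_p(i.item_id, :user_id, 'read')
} {
    lappend readable_files $name
}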

Allow preparing a statement with no parameters (see https://www.postgresql.org/docs/11/sql-prepare.html)

Basic test of the prepared statements feature on ::xo::dc api

Fix typo

    • -2
    • +2
    /openacs-4/packages/xowiki/tcl/xowiki-procs.tcl
Prefer xo::dc api

    • -2
    • +2
    /openacs-4/packages/xowiki/tcl/xowiki-procs.tcl
Fix typo

Reuse the canvas objects throughout the proctoring

make naming more consistent

differentiate between sent and received intra-server messages

added multiple delivery methods to intra-server talk

Here is some background information for my experiments with the delivery methods.

For this experiment, I compared 5 different means for this kind of communication:

- ns_http over HTTP (the standard setup, which is used in OpenACS 5.10)

- ns_http over HTTPS

- ns_connchan over HTTP using persistent connections

- ns_connchan over HTTPS using persistent connections

- ns_udp using UDP

 

I tested this in a 2-node cluster (to keep the measurements simple), consisting of the canonical server and one node listening on the following protocols/ports:

- http://127.0.0.1:8101

- https://127.0.0.1:8444

- udp://127.0.0.1:8101

The first test sends 1000 intra-server commands per delivery method from the canonical server to the 2nd node:

set times 1000
lappend _ ns_http-[time {::acs::CS_127.0.0.1_8101 message set x ns_http} $times]
lappend _ ns_https-[time {::acs::CS_127.0.0.1_8444 message set x ns_https} $times]
lappend _ ns_connchan-http-[time {::acs::CS_127.0.0.1_8101 message -delivery connchan set x ns_http} $times]
lappend _ ns_connchan-https-[time {::acs::CS_127.0.0.1_8444 message -delivery connchan set x ns_https} $times]
lappend _ ns_udp-[time {::acs::CS_127.0.0.1_8101 message -delivery udp set x udp} $times]
join $_ \n

This leads to the following results:

ns_http             564.027083 microseconds per iteration
ns_https           1483.478916 microseconds per iteration
ns_connchan-http    147.688541 microseconds per iteration
ns_connchan-https    68.480875 microseconds per iteration
ns_udp              198.343416 microseconds per iteration

Since the commands are sent in sequence, the variant with persistent HTTP connections is the fastest, although it is implemented in Tcl. The slowest is the version with HTTPS via ns_http without persistent connections. We see a factor of 20 in terms of performance.

When using ns_udp with the "-noreply" option, we would have a "fire and forget" solution, which might be OK when the packet loss rate is low. That would lead to 54 microseconds.
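
For reference, a fire-and-forget send would look roughly like this (assuming NaviServer's ns_udp command with the -noreply option mentioned above):

# send the payload without waiting for an answer
ns_udp -noreply 127.0.0.1 8101 "set x udp"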

Clearly, the numbers for persistent connections look the best, but this approach also has some disadvantages compared to the other solutions:

- the server has to keep a socket open to every node (but no connection thread)

- the keepalive setting of the server must be set sufficiently long to gain an advantage from persistent connections (e.g. 5 s keepalive, heartbeat frequency of 1 s)

- since the whole communication goes over a single connection, it is necessary to serialize the requests, to avoid that multiple connection threads write concurrently to the same connection and interfere with each other

- it is probably necessary to have a separate thread handling the outgoing intra-server talk (implementing command queuing, async handling, heartbeat, etc.); since this has to be a Tcl thread, it will use up some memory (similar to a connection thread)

- this intra-server talk thread requires queuing and event handling that we have so far only in xotcl-core, so, when implemented, it will require the xotcl-core package (maybe this can later be moved to acs-core)

As a second experiment, I've implemented a simple heart-beat service inside the request monitor that checks the liveliness of the nodes every second. So, in contrast to the back-to-back commands of the first experiment, these are single calls. Here are some sample values for the 5 delivery methods:

[27/Dec/2022:20:29:34.171376][::throttle] Notice: -cluster: http://127.0.0.1:8101 set x ns_http sent total 2.907ms

[27/Dec/2022:20:29:34.182241][::throttle] Notice: -cluster: https://127.0.0.1:8444 set x ns_https sent total 10.798ms

[27/Dec/2022:20:29:34.183475][::throttle] Notice: -cluster: http://127.0.0.1:8101 set x ns_connchan sent total 1.161m

[27/Dec/2022:20:29:34.183657][::throttle] Notice: -cluster: https://127.0.0.1:8444 set x https-connchan sent total 0.086ms

[27/Dec/2022:20:29:34.188564][::throttle] Notice: -cluster: udp://127.0.0.1:8101 set x udp sent total 4.861ms

[27/Dec/2022:20:30:25.494080][::throttle] Notice: -cluster: http://127.0.0.1:8101 set x ns_http sent total 2.049ms

[27/Dec/2022:20:30:25.516306][::throttle] Notice: -cluster: https://127.0.0.1:8444 set x ns_https sent total 21.903ms

[27/Dec/2022:20:30:25.517239][::throttle] Notice: -cluster: http://127.0.0.1:8101 set x ns_connchan sent total 0.814ms

[27/Dec/2022:20:30:25.522957][::throttle] Notice: -cluster: https://127.0.0.1:8444 set x https-connchan sent total 0.33ms

[27/Dec/2022:20:30:25.534274][::throttle] Notice: -cluster: udp://127.0.0.1:8101 set x udp sent total 11.099ms

[27/Dec/2022:20:31:54.993455][::throttle] Notice: -cluster: http://127.0.0.1:8101 set x ns_http sent total 2.431ms

[27/Dec/2022:20:31:55.003036][::throttle] Notice: -cluster: https://127.0.0.1:8444 set x ns_https sent total 9.499ms

[27/Dec/2022:20:31:55.010100][::throttle] Notice: -cluster: http://127.0.0.1:8101 set x ns_connchan sent total 6.981ms

[27/Dec/2022:20:31:55.010585][::throttle] Notice: -cluster: https://127.0.0.1:8444 set x https-connchan sent total 0.322ms

[27/Dec/2022:20:31:55.017764][::throttle] Notice: -cluster: udp://127.0.0.1:8101 set x udp sent total 7.13ms

We see in essence the same pattern. The approach with the persistent connections looks best here as well. It is not clear to me why HTTPS over connchan is the fastest, but the communication seems OK; maybe some buffering/Nagle algorithm is responsible for this. We see as well that the round trip typically takes single- to double-digit milliseconds. So, when a single HTTP request to nsd triggers multiple cache-flush operations to multiple nodes, this will take some time: when e.g. the request issues 5 cache-flush operations, which are sent to 5 nodes, and every request takes 1 ms, the cache flushing will make the original request about 25 ms slower. This might also be an argument for a separate thread doing these operations asynchronously.

    • -1
    • +12
    /openacs-4/packages/acs-tcl/tcl/cluster-init.tcl
    • -30
    • +230
    /openacs-4/packages/acs-tcl/tcl/cluster-procs.tcl
improved clusterwide operations

    • -6
    • +30
    /openacs-4/packages/acs-tcl/tcl/memoize-procs.tcl