Instead of proc, you should use proc_doc. This results in the procedure showing up in the automatically generated, procedure-by-procedure documentation. If you're a new programmer, you might want to read the common errors list. You call proc_doc with a documentation string in between the args and the body:
proc_doc plus2 {x} "returns the result of adding 2 to its argument" {
    return [expr $x + 2]
}
Instead of hard-coding a configuration setting:

proc bboard_users_can_add_topics_p {} {
    return 0
}

put a parameter in the .ini file:

[ns/server/photonet/acs/bboard]
; can a user start a new bboard
UserCanAddTopicsP=0

and read it with ad_parameter:

proc bboard_users_can_add_topics_p {} {
    return [ad_parameter UserCanAddTopicsP bboard]
}
If you define a Tcl procedure that is site-specific, name it with a prefix that is site-specific. E.g., the EDF scorecard.org site uses "score_". A site to sell khakis uses "k_".
If it is a community procedure, name it with an ad_ prefix and put it somewhere in the /tcl directory.
If it is a utility procedure, name it util_ and put it in the /home/nsadmin/modules/tcl/utilities.tcl file.
Try not to be redundant with the directory name. So if you have a bunch of scripts in a directory called "users", the script to look at one user would just be "one.tcl" rather than "user.tcl".
User-visible pages should end with ad_footer or gc_footer or calendar_footer, etc. Admin pages should end with ad_admin_footer. Then if the webmaster encounters a bug or a page that doesn't do what is needed, he or she can complain to a programmer.
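For example, a user-visible page might wrap up like this (just a sketch; the page title and content are made up):

# assemble the page and sign off with ad_footer so the page carries
# the maintainer's email address
ns_return 200 text/html "[ad_header "One User"]
<h2>One User</h2>

... information about the user ...

[ad_footer]"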
If we're building a system where we can't get any better theories from the publisher, we design pages to have the following structure:
How? Follow two principles: offer no dead links, and give the user as much information as possible about what's behind each link.
In a photography classifieds page, don't show categories that don't have any current ads (no dead links), and count up the ads in each category for display next to the link (as much info as possible). Isn't a GROUP BY that sequentially scans the classifieds table kind of expensive for a top-level page on a non-commercial site? Sure. But the solution is to use Memoize_for_Awhile to cache the results in virtual memory.
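For example, the expensive count might live in a procedure that the top-level page calls through the cache. This is only a sketch: it assumes Memoize_for_Awhile takes a Tcl command plus a lifetime in seconds, and the classified_ads table and its columns are illustrative.

# returns a list of {category n_ads} pairs; expensive because of the
# sequential scan and GROUP BY
proc photo_classified_category_counts {} {
    set db [ns_db gethandle]
    set selection [ns_db select $db "select category, count(*) as n_ads
    from classified_ads group by category"]
    set result [list]
    while { [ns_db getrow $db $selection] } {
        set_variables_after_query
        lappend result [list $category $n_ads]
    }
    ns_db releasehandle $db
    return $result
}

# on the top-level page: recompute at most once every 10 minutes
set category_counts [Memoize_for_Awhile photo_classified_category_counts 600]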
Computer time is cheap; user time is precious. Work the server hard on behalf of each and every user. Support the user with personalization. Find out what is going to be down a hyperlink before offering it to the user. Buy extra processors as the community grows.
Pages that write to the database should check whether the server is running in read-only (maintenance) mode:

if { [ad_read_only_p] } {
    ad_return_read_only_maintenance_message
    return
}
Systems should be designed so that a user clicking Submit twice will not result in a duplicate database entry. The fix is to generate the unique primary key when serving the form or the approval page (the approval page is better, since the fix then still works if the user reuses the form). See the ecommerce chapter of my book for a discussion of how this works, and see the news subsystem for a simple implementation example.
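A rough sketch of the idea (not the actual news code; the sequence, table, and column names are invented): draw the key from a sequence while serving the form, then let the primary-key constraint absorb a second submit.

# page that serves the form: grab the key now
set db [ns_db gethandle]
set item_id [database_to_tcl_string $db "select item_id_sequence.nextval from dual"]
ns_write "<form method=post action=insert-item.tcl>
<input type=hidden name=item_id value=$item_id>
Title: <input type=text name=title size=50>
<input type=submit value=Submit>
</form>"

# insert-item.tcl: a second submit of the same form violates the primary
# key, so treat the error as 'already inserted' instead of complaining
set_the_usual_form_variables
set db [ns_db gethandle]
if [catch { ns_db dml $db "insert into items (item_id, title)
values ($item_id, '$QQtitle')" } errmsg] {
    ns_returnredirect "index.tcl"
    return
}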
Systems should be designed so that they do something sensible with plain text and HTML. Add an "html_p" column to any table that accepts user input. Store the user input in unadulterated form in the database. Convert it to HTML on the fly if necessary when displaying (this consists of guessing where to stick in <P> tags and quoting greater-than or less-than signs). See the news subsystem for an example.
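The display-time conversion might look roughly like the following sketch (the proc name is made up; the news subsystem does the real thing):

# render stored content according to its html_p flag
proc display_user_content {content html_p} {
    if { $html_p == "t" } {
        return $content
    }
    # plain text: quote characters that would otherwise be taken as markup ...
    regsub -all {&} $content {\&amp;} content
    regsub -all {<} $content {\&lt;} content
    regsub -all {>} $content {\&gt;} content
    # ... and guess that a blank line means a new paragraph
    regsub -all {\n\s*\n} $content {<p>} content
    return $content
}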
Our convention in the ACS is to present the existing items in a list (UL). Then we have a blank line (P tag). Then we have a new list item (LI) with a hyperlinked phrase like "add new item".
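In HTML terms, a listing built this way comes out roughly as follows (the URLs are made up):

<ul>
<li><a href="one.tcl?item_id=37">first existing item</a>
<li><a href="one.tcl?item_id=42">second existing item</a>
<p>
<li><a href="add.tcl">add new item</a>
</ul>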
Consider the general_comments table:

create table general_comments (
	comment_id	integer primary key,
	on_what_id	integer not null,
	on_which_table	varchar(50),
	user_id		integer not null references users,
	comment_date	date,
	ip_address	varchar(50) not null,
	modified_date	date,
	content		clob,
	approved_p	char(1) default 't' check(approved_p in ('t','f'))
);

Note that it points to rows in other tables via the on_what_id and on_which_table columns.
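Pulling the comments on a particular row then means supplying both the table name and the key; for instance, against a hypothetical news table keyed on news_id:

select comment_id, content, comment_date
from general_comments
where on_which_table = 'news'
and on_what_id = :news_id
and approved_p = 't'
order by comment_date;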
Use the users_alertable view instead of the users table as a selection pool for generating alerts:

create or replace view users_alertable
as select *
from users
where (on_vacation_until is null or on_vacation_until < sysdate)
and (deleted_p is null or deleted_p = 'f')
and (email_bouncing_p is null or email_bouncing_p = 'f');
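Any alert-sending query then joins against the view rather than the base table; for instance (the classified_email_alerts table is invented for illustration):

select u.user_id, u.email, a.category
from users_alertable u, classified_email_alerts a
where u.user_id = a.user_id;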
Auditing a table consists of recording who changed each row, when, and from where, and preserving the old versions of rows so that the full history can be reconstructed. The ACS has a number of auditing conventions which you should follow, as well as some utility procedures which can be used to display the history of all states a table (or set of tables) has been in. This is documented in the Audit Trail Package.
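The flavor of the convention is a shadow table fed by a trigger; here is a sketch only (see the Audit Trail Package for the real conventions; the news table and its columns are hypothetical):

create table news_audit (
	news_id			integer,
	title			varchar(200),
	body			varchar(4000),
	audit_entry_date	date
);

create or replace trigger news_audit_tr
before update or delete on news
for each row
begin
	-- squirrel away the old version of the row before it changes
	insert into news_audit (news_id, title, body, audit_entry_date)
	values (:old.news_id, :old.title, :old.body, sysdate);
end;
/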
The two places on photo.net where there are decorations like this are up in the headline (turning it into an HTML table) and also alongside lists of stuff. Procedures that support this are the following:

ad_decorate_side (in /tcl/ad-sidegraphics.tcl)
ad_decorate_top (in /tcl/ad-defs.tcl)
Remember Alan Cooper's adage that "No matter how cool your user interface, it would be better if there were less of it."
We applied this principle on jobdirect.com by suppressing the categorization machinery until the employer-user had picked at least 8 students. Categorization then appeared as an option when the user was viewing his or her list of favorite students (presumably this is the only time when the user might have been thinking "hey, this list is getting long"). Once the user had elected to switch over to the more complex categorization interface, future picks of favorite students would result in messages like "Oh, into which folder would you like us to put this resume?"
For the advanced user, given that you're going to have categorization, you might ask how much is needed. Users are familiar with the hierarchical directory structures in the Windows and Macintosh file systems. Or are they? Hierarchical file systems were lifted from the operating systems of the 1960s and pushed directly into consumers' laps without anyone asking the question "Are desktop users in fact able to make effective use of this interface directly?" The programmers who built file systems needed an O(log n) retrieval method for files. A tree data structure yields O(log n) retrieval, so a file system has an underlying hierarchical structure. The programmers were too lazy to develop any kind of categorization or database scheme on top of the hierarchical tree so they just exposed the tree structure directly to users. So let's not invest too much authority in tree-structured file systems.
Even if they have painfully learned to manage a hierarchy of files on their desktop, do users want to manage another hierarchy on each Web service that they use?
Do we need elaborate hierarchies? Consider the user who has 1000 items to manage but is very likely to want to work on the 20 selected or uploaded in the last month. Does this user need to wade through 1000 listings to find the 20 most recent? No, not if we provide a "sort by most recent" option. Then the user can simply look at the top of the page and not scroll down too much.
Can we survive with only one level of hierarchy? I think so. Especially if we give users good searching and sorting tools within that single level.
Users typing SQL queries?!? Am I insane? How could a random Web surfer be expected to master the profundities of SQL syntax?
Thus the average Web developer will typically build an HTML form to shield the user from the complexity of SQL while retaining all the power of SQL. This form will have one input for every column in the table, perhaps with some ability for a user to specify operators (e.g., "less than", "equal to", "starting with"). The form will have a select box or radio button set where the user can decide whether he wants to AND or OR the criteria.
This approach shields the user from the trivial syntactic complexity of SQL but directly exposes the far more brain-numbing semantic complexities of SQL in general and the publisher's data model in particular.
Bottom Line Principle 1: the first search form that your user sees ought to be a single text entry box, just like AltaVista's. The results page can explain how the results were obtained and perhaps offer a link to an advanced search form at the bottom (on the presumption that the user has scanned all the results and found them inadequate).
Let's now consider the case of the user who fills out a multi-input search form or types a long phrase into a text search box. I.e., the user has given the server lots of information about his or her interests. What is this user's reward? Generally fewer results than would be delivered to a user who only provided one query word or filled in one field in the moby search form. Compare this to AltaVista, Lycos, and other full-text search systems that people use every day. The more words a user gives a public search engine, the more results are returned (though oftentimes only the first 20 or 30 are displayed).
Bottom Line Principle 2: the more information a user gives to your server the more results your server should offer to the user.
This principle seems dangerous in practice. What if the user types so many words that essentially every item in the database is a match? Wouldn't it be better to offer an advanced search form that lets the user limit results explicitly?
Very seldom. Users are terrible at formulating boolean queries. Most often, they'll come up with a query that matches every row in your database or a query that matches none. You really shouldn't engineer software so that it is possible for the server to return a page saying "Your query returned zero results."
What's the way out? Suppose that you could score every row in the database against the user's criteria. It would then be perfectly acceptable to return every row in the database, ranked by descending score. The user need only look at the top of the page and may ignore the less relevant results.
Is this a radical idea? Hardly. All the public search engines use it. They may return tens of thousands of results if a user supplies a long query string but the most relevant ones are printed first.
Bottom Line Principle 3: scoring and ranking and returning the top-scoring items is a much better user interface than forcing the user into a simplistic binary in/out.

Suppose that your users are giving you criteria that are more structured than free text. What's a good user interface? On the search form, ask for preferences but provide checkboxes to "absolutely exclude items that don't meet this criterion". On the results page, print items as follows:
Items that meet all your criteria
- 98: foobar
- 92: yow
...

Items that meet some of your criteria
- 83: blatzco
- 83: bard
- 82: cookie monster
...
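If the criteria map onto columns, the score can be computed right in Oracle and used to sort; a sketch (the catalog_items table, its columns, and the weights are invented, and :color/:size/:fabric stand for the user's choices):

select item_id, item_name,
       decode(color, :color, 40, 0)
       + decode(size, :size, 30, 0)
       + decode(fabric, :fabric, 30, 0) as score
from catalog_items
order by score desc;

Rows that hit the maximum score go under "Items that meet all your criteria"; the rest fall under "Items that meet some of your criteria".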
Here are some warning signs that you need to get help from a real SQL programmer:
You find yourself typing "lock table".

You try "set timing on" and "set autotrace trace" in SQL*Plus and find that some of your queries are taking more than a fraction of a second and/or requiring full table scans. Online systems should try to get everything done within 1/10th of a second. Remember that if your page takes 1/10th of a second, you can only serve 10 pages/second per processor.
The ns_share construct is very slow in the Tcl 8.2 version of AOLserver. We recommend the use of the much more powerful nsv shared-variable API instead.
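A sketch of the replacement, assuming the nsv API (nsv_set/nsv_get/nsv_incr) that AOLserver 3 provides; the array and key names are made up:

nsv_set counters n_downloads 0          ;# e.g., at server startup
nsv_incr counters n_downloads           ;# in a page, instead of ns_share
set n_downloads [nsv_get counters n_downloads]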
Use ad_register_filter rather than ns_register_filter; it gives you extra flexibility. To provide this extra flexibility, the ACS actually registers a single "über-filter" with AOLserver and handles filtering itself (in ad_handle_filter).
You can use Perl to change all your legacy code to use ad_register_filter:
perl -pi -e 's/ns_register_filter/ad_register_filter/g' files-to-process...
You can use Perl to change all your legacy code to use ad_schedule_proc:

perl -pi -e 's/ns_schedule_proc( -\w+)?/"ad_schedule_proc$1".($1 ? " t" : "")/eg;' files-to-process...

This adds the necessary t after the -thread or -once flag (e.g., it converts ns_schedule_proc -once to ad_schedule_proc -once t).