LM_NET: Library Media Networking

LM_NET Archive



Date:  April 1, 1994

FYI


---------- Forwarded message ----------
Date: Fri, 1 Apr 1994 15:22:19 -0800
From: Gleason Sackman <sackman@plains.nodak.edu>
To: Multiple recipients of list <net-happenings@is.internic.net>
Subject: the NeuroGopher high-performance Gopher server (fwd)

Forwarded by Gleason Sackman - InterNIC net-happenings moderator
()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()()

---------- Text of forwarded message ----------
Date: Fri Apr 01 13:10:38 1994
From: Angela Seraphim <angela@pink.com>
To: gopher-news@boombox.micro.umn.edu
Subject: the NeuroGopher high-performance Gopher server

As Gopher usage continues to grow, server administrators are
confronted with the problem of supporting increasing numbers of
users without buying faster hardware. Even though Gopher
administrators do not insist on sending gratuitous bitmaps with
each menu, there is a real need for faster server technology to
handle the increasing user base.

A new Gopher server implementation (NeuroGopher) allows for 2-10
times the throughput and capacity of conventional Unix Gopher
servers without purchasing faster hardware. NeuroGopher borrows
ideas from neural-network adaptive learning technology to
achieve significant throughput enhancements.

By analyzing the Gopher server logs, it is possible to determine
the most popular paths through the Gopher server's menus. Depending
on the server's menu and user profile, several empirical studies
have found that between 60 and 90% of the clients follow one of a
small number of paths through the server's menus. Of course, usage
patterns and the menu structure can change, so it is crucial to have
an adaptive/dynamic map of likely user paths through the
information space, so that performance does not decay over time and
adapts to the changing intelligence level of the server
administrator and user community.
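The post does not show how the log analysis works, but the
path-popularity tally it describes might look something like this
toy Python sketch (the log format, function names, and sample
selectors here are all hypothetical, not taken from NeuroGopher):

```python
from collections import Counter

def top_paths(log_lines, k=3):
    """Tally how often each selector path appears in a transfer log
    and return the k most-traveled paths with their counts."""
    counts = Counter()
    for line in log_lines:
        # Assume each log line ends with the requested selector path.
        selector = line.rsplit(None, 1)[-1]
        counts[selector] += 1
    return counts.most_common(k)

# Hypothetical sample log entries:
log = [
    "Fri Apr  1 13:10:38 1994 client1 retrieved 1/weather",
    "Fri Apr  1 13:11:02 1994 client2 retrieved 1/weather",
    "Fri Apr  1 13:11:40 1994 client3 retrieved 0/about.txt",
    "Fri Apr  1 13:12:05 1994 client4 retrieved 1/weather",
]
print(top_paths(log, k=2))
# [('1/weather', 3), ('0/about.txt', 1)]
```

An adaptive map as claimed in the post would re-run such a tally on
a moving window of recent log entries, so the popular-path table
tracks changing usage patterns rather than decaying over time.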

By adaptively predicting the item which the user will next request,
the NeuroGopher server can practice speculative serving (analogous
to speculative execution in superscalar RISC CPU architectures).
Predictive analysis of server logs makes it possible to pipeline
processing of user requests, and this yields part of NeuroGopher's
performance enhancement. However, predicting what the client will
ask for before the client makes the request wouldn't yield
substantial performance gains without either rewriting client
software or properly estimating when to send a response to the
user's request. NeuroGopher estimates both the roundtrip transit
time between the server and client and how long the user at the
keyboard will pause between queries. Given this information,
NeuroGopher can properly time when to send the response. In other
words, NeuroGopher continuously estimates the total transit time
and so knows how fast the user requests information. This makes it
possible to transmit appropriate answers to the questions that the
client has not yet asked, yet have the answers arrive immediately
after the client actually makes the request (rather than before).
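The timing estimate described above could be sketched as follows; a
plausible (and entirely hypothetical, not NeuroGopher's actual)
approach is to smooth both the measured round-trip time and the
user's pause between queries with exponential moving averages, then
hold each precomputed answer for the difference:

```python
class SpeculativeTimer:
    """Toy sketch: decide how long to hold a precomputed response so
    it arrives just after the user's next request, not before."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha   # smoothing factor for both estimates
        self.rtt = None      # estimated round-trip time (seconds)
        self.think = None    # estimated user pause between queries

    def _ema(self, old, sample):
        # Exponential moving average; first sample seeds the estimate.
        return sample if old is None else (1 - self.alpha) * old + self.alpha * sample

    def observe(self, rtt_sample, think_sample):
        """Fold one measured round-trip and user-pause pair into the estimates."""
        self.rtt = self._ema(self.rtt, rtt_sample)
        self.think = self._ema(self.think, think_sample)

    def hold_time(self):
        """Seconds to wait before transmitting the speculative answer:
        the expected user pause minus the transit time, never negative."""
        return max(0.0, self.think - self.rtt)

t = SpeculativeTimer()
t.observe(rtt_sample=0.3, think_sample=5.0)
t.observe(rtt_sample=0.5, think_sample=4.0)
print(round(t.hold_time(), 2))
# → 4.46
```

With these numbers the smoothed pause is 4.8 s and the smoothed
round trip 0.34 s, so the answer is held about 4.46 s and lands
just after the request is actually typed.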

NeuroGopher's SPOUT mode (Synchronous Processing ignoring Ordinary
User Transactions) takes advantage of the law of large numbers:
since most people are asking for the same thing, it is possible
to completely ignore 95% of the requests and SPOUT the answer at
clients before they ask.
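The law-of-large-numbers claim amounts to checking what fraction of
traffic a single precomputed answer covers. A toy sketch (all names
and data hypothetical) of that check:

```python
from collections import Counter

def spout_coverage(requests):
    """Return the single most-requested item and the fraction of all
    traffic it accounts for, i.e. how much SPOUT mode could satisfy
    by pushing one answer at everyone."""
    counts = Counter(requests)
    top_item, top_count = counts.most_common(1)[0]
    return top_item, top_count / len(requests)

# Hypothetical traffic: 19 of 20 clients want the same selector.
reqs = ["1/weather"] * 19 + ["0/about.txt"]
item, frac = spout_coverage(reqs)
print(item, frac)
# → 1/weather 0.95
```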

NeuroGopher is available via anonymous ftp from
pink.cloud.heaven.com and is free to commercial sites. Non-profit
and educational sites may license NeuroGopher; inquiries should be
sent to god@pink.cloud.heaven.com.

