David Cottingham : Research Bibliography
Copyright © 2009 David Cottingham. All Rights Reserved.
Automatically generated from a bibfile
If you notice any errors or broken links, please let me know:
david.cottingham.nospam@cl.cam.ac.uk


Topics:
Sensor Networks in Transportation
(Inter-)Vehicular Networks
Satellite Positioning
Cellular Positioning
Road Pricing
Ad-Hoc Networks
Pricing & Incentives
Mobile IPv6 & 4G Cellular Networks
Network Coverage Mapping
Radio Engineering
RF-based Positioning
Spatial Databases
Filtering & Smoothing
Miscellaneous
Graph Theory
Misc. Security


Sensor Networks in Transportation

H. Qi and X. Wang and S. Iyengar and K. Chakrabarty. Multisensor Data Fusion in Distributed Sensor Networks Using Mobile Agents. 2001. in Proc. of the 4th Int. Conf. Information Fusion.
Describes how to perform sensor data processing by passing mobile agent code between sensors, rather than passing large quantities of data back to a central processing point. Uses a Multi-Resolution Integration algorithm, which analyses the overlap function of data from sensors that are in a cluster (i.e. should be giving the same readings) and picks the peak that is both the highest and the widest in the graph. This is done at progressively finer resolution (as much as is demanded by the original query) to obtain an answer. The mobile agent carries this partially aggregated data with it to the next node, where the answer becomes more accurate. If the desired accuracy is achieved before the MA has visited all the nodes in its itinerary, it may return to base, saving energy in transmission and processing.
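
The peak-picking step lends itself to a short sketch (an illustration of the overlap-function idea, not the authors' code; the interval representation and example readings are invented):

    # Each sensor reading is an interval (lo, hi); the overlap function at a
    # point is the number of intervals covering it. Multi-Resolution
    # Integration picks the region that is both highest and widest.
    def overlap_peak(intervals):
        # Sweep over sorted endpoints, tracking how many intervals are open.
        events = sorted([(lo, +1) for lo, hi in intervals] +
                        [(hi, -1) for lo, hi in intervals])
        depth, best = 0, (0, 0.0, None)   # (height, width, (start, end))
        for i, (x, delta) in enumerate(events):
            depth += delta
            if i + 1 < len(events):   # width of the plateau to the next endpoint
                width = events[i + 1][0] - x
                if (depth, width) > best[:2]:
                    best = (depth, width, (x, events[i + 1][0]))
        return best

    # Three clustered sensors plus one faulty outlier (hypothetical values).
    readings = [(9.8, 10.4), (9.9, 10.3), (10.0, 10.6), (14.0, 15.0)]
    print(overlap_peak(readings))   # -> (3, 0.3, (10.0, 10.3)): the true peak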

T. Clouqueur and P. Ramanathan and K. Saluja and K.-C. Wang. Value-Fusion Versus Decision-Fusion for Fault-tolerance in Collaborative Target Detection in Sensor Networks. 2001. in Proc. of the 4th Int. Conf. Information Fusion.
Comparison between two methods of collaborative decision making in sensor networks. In value fusion each sensor distributes its measured value, and each sensor then makes a decision based on the average of all the received values. In decision fusion each sensor compares its own value to a threshold and distributes a boolean, so all sensors simply take a majority vote. Simulation shows that value fusion is superior when the sensor network is highly reliable and fault free. With faulty sensors, decision fusion's performance degrades at a lesser rate.
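
A minimal sketch contrasting the two schemes (not the paper's simulation code; the threshold and readings are invented):

    # Value fusion: every sensor shares its raw reading; each node then
    # compares the average of all readings to the detection threshold.
    def value_fusion(readings, threshold):
        return sum(readings) / len(readings) >= threshold

    # Decision fusion: every sensor shares only a local boolean decision;
    # each node takes a majority vote over the received decisions.
    def decision_fusion(readings, threshold):
        votes = [r >= threshold for r in readings]
        return sum(votes) > len(votes) / 2

    readings = [0.9, 1.1, 1.0, 5.0]        # last sensor is faulty
    print(value_fusion(readings, 1.5))     # True: the outlier drags the mean up
    print(decision_fusion(readings, 1.5))  # False: majority vote masks the fault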

K. Sivalingam. Wireless Sensor Networks. 2004. IEEE VTS News, 51(3):9--16.
An overview of WSNs, good summary of cluster model of sensor networks, directed diffusion, security, power usage, and location systems.

L. Lamport and R. Shostak and M. Pease. The Byzantine Generals Problem. 1995. in Advances in Ultra-Dependable Distributed Systems. IEEE Computer Society Press.
Overview of the problem of reliable communication in distributed systems. With unsigned messages, it is proven that no solution can exist with fewer than $3m+1$ generals if there are $m$ traitors present in a fully connected graph. With signed messages, in a fully connected graph we may have arbitrary numbers of traitors. With a $p$-regular graph (i.e. all nodes have $p$ distinct neighbours), the use of unsigned messages implies that the problem can be solved for $p \geq 3m$, whilst with signed messages we require only that the subgraph of loyal generals be connected to achieve reliable communication by majority voting.

P. Vidales and F. Stajano. The Sentient Car: Context-Aware Automotive Telematics. 2002. in Proc. 1st European Workshop on Location Based Services.
Overview of the sentient car project, detailing the basic configuration, and the uses that the location and sensor data was put to, including a pollution map.

D. Krajzewicz and G. Hertkorn and C. Rössel and P. Wagner. SUMO (Simulation of Urban MObility); An open-source traffic simulation. 2002. in Proc. of the 4th Middle East Symposium on Simulation and Modelling, pages 183--187. SCS European Publishing House.
A traffic simulator used for modelling arbitrary road topologies, taking into account junction types, traffic lights, speed limits, etc. Topologies are stored in XML, and the simulator has a useful GUI. The code is open source C++.

J.-P. Hubaux and S. Čapkun and J. Luo. The Security and Privacy of Smart Vehicles. 2004. IEEE Security and Privacy, 2(3):49--55.
Brief overview of the benefits of intelligent transportation systems. Points out the need for careful marketing of technologies to the public to ensure that they are not viewed as a privacy intrusion. Location data is also reckoned to be key in ITSs, and the article mentions multilateration and its security. Finally, it touches on token passing for controlling vehicles' access to busy junctions or slip roads.

W. Li and H. Leung. Simultaneous Registration and Fusion of Multiple Dissimilar Sensors for Cooperative Driving. 2004. IEEE Transactions on Intelligent Transportation Systems, 5(2):84--98.
An algorithm based on an Unconstrained Kalman Filter is presented, to combine data from multiple sensors (DGPS, inertial, radar, camera...) to ascertain the relative positions of two vehicles for platoon driving. Without regular calibration, use of any one sensor results in inaccurate estimates. Instead, a UKF is used to combine the data, recalibrate sensor biases based on readings from the others, and obtain a more accurate fix. The authors note that if the filter is constrained by using digital map data, the accuracy improves sufficiently for platoon driving.

Maziar Nekovee. Sensor Networks on the Road: The Promises and Challenges of Vehicular Ad Hoc Networks and Grids. 2005. in Workshop on Ubiquitous Computing and e-Research.
Overview of possible applications of VANETs, and a brief description of some of the difficulties that may be encountered. Also talks about the possibilities of vehicular grids, where processing tasks are distributed over multiple vehicles. This presents challenges due to the high churn rate in vehicular networks.

AMI-C. AMI-C Use Cases. 2003. 1001, AMI-C.
Details over 100 use cases for telematics applications in vehicles, particularly multimedia ones.

Dieter Fox and Jeffrey Hightower and Lin Liao and Dirk Schulz and Gaetano Borriello. Bayesian Filtering for Location Estimation. 2003. IEEE Pervasive Computing, 2(3):24--33.
Excellent explanation of the different types of Bayesian filter. Bayesian estimation can be used to calculate probability distributions of the expected value of a reading (e.g. position), by using a movement model where the probability of the next position given the current position (both distributions) is known. By multiplying this quantity, $p(x_1|x_0)$, by our belief about the position at that point in time, and integrating over all possible positions, we obtain a new probability distribution describing our new belief as to where the object is. We then correct that belief using any sensor observations, to obtain a corrected belief distribution that indicates an estimate of the position. Kalman filters are suitable for sensors that do not necessarily have a high data rate, but are highly accurate. They rely on the belief distribution being Gaussian (and therefore unimodal), and are used when the data are continuous. Multi-hypothesis tracking overcomes the unimodal constraint by combining multiple Kalman filters. Grid-based approaches are much more resource intensive, as they maintain a value for each possible discrete location that indicates the belief that the object is in that cell. Topological approaches avoid such complexity by designating certain nodes on a graph, each with a belief probability, with edges representing possible paths in physical space. This can be used to coarsely represent indoor environments. Particle filters use discrete samples; we begin with all position samples having equal probability (weight) of being true. When a sensor reading is taken, we multiply the sample weights by the probability distribution $p(z|x)$ (i.e. the probability that the sensor reading $z$ was obtained at each position $x$). This then means that samples at the more likely positions have higher weights. We then randomly pick samples from the distribution, with probability of pick proportional to the sample weight. This then means there is high sample density where $p(z|x)$ was high. We then repeat the process, and, knowing the movement model, are able to predict the new position far more accurately than at the start (essentially, we perform the random pick of samples after re-weighting them, and then we shift the graph along the x axis depending on our belief of how the object will move by the time the next sensor reading is taken). The paper goes on to mention how the authors have used such techniques to combine readings from ultrasonic, infrared, and laser range finders to ascertain people's positions, and also mentions how anonymous location techniques (where a person can be tracked without their identity being known) can be supplemented by less frequent identity-aware sensors to enable tracking of users.
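
The particle filter loop described above fits in a few lines (a minimal bootstrap filter for a 1-D position; the Gaussian motion and sensor models and all parameters are assumptions for illustration, not the paper's):

    import math
    import random

    def gaussian(x, mu, sigma):
        # Unnormalised Gaussian likelihood; normalisation cancels in resampling.
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def particle_filter_step(particles, z, move=1.0, move_sd=0.5, sense_sd=1.0):
        # Correct: weight each particle by p(z|x), the likelihood of the
        # sensor reading z given the particle's position x.
        weights = [gaussian(z, x, sense_sd) for x in particles]
        # Resample: pick particles with probability proportional to weight,
        # so sample density becomes high where p(z|x) was high.
        particles = random.choices(particles, weights=weights, k=len(particles))
        # Predict: shift each particle according to the movement model
        # p(x_t|x_{t-1}), here a noisy constant drift.
        return [x + move + random.gauss(0, move_sd) for x in particles]

    particles = [random.uniform(0, 20) for _ in range(1000)]  # uniform prior
    for z in [5.0, 6.2, 7.1, 8.0]:            # hypothetical position readings
        particles = particle_filter_step(particles, z)
    print(sum(particles) / len(particles))    # the estimate tracks the readings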

James Tate. A novel research tool -- presenting the highly instrumented car. 2005. Traffic Engineering & Control, 46(7):262--265.
Describes the Leeds instrumented car (mostly environmental sensors, though also some thoughts on maintaining distance from the car ahead).

V. Bychkovsky and K. Chen and M. Goraczko and H. Hu and B. Hull and A. Miu and E. Shih and Y. Zhang and H. Balakrishnan and S. Madden. Data management in the CarTel mobile sensor computing system. 2006. in Proc. ACM SIGMOD, pages 730--732.
Describes the CarTel architecture for the collection of data from sensors deployed on cars, by means of a delay tolerant networking architecture. Vehicles are data mules that make use of WiFi and Bluetooth connectivity to upload data back to base. An SQL-like syntax can be used to distribute queries to all nodes, or a subset of them, in the architecture, including parameters specifying the particular data set required and the rate of readings sent back.

Bret Hull and Vladimir Bychkovsky and Yang Zhang and Kevin Chen and Michel Goraczko and Allen K. Miu and Eugene Shih and Hari Balakrishnan and Samuel Madden. CarTel: A Distributed Mobile Sensor Computing System. 2006. in Proc. ACM SenSys.
Longer description of the CarTel architecture, including CafNet, and the distributed query interface. The analysis portal is also detailed, and further work. The sensors deployed are limited to OBD and GPS devices, along with various networking capabilities. The main direction of research appears to be towards traffic monitoring and routing applications, along with some WiFi deployment observations and DTN-related research.

Jakob Eriksson and Lewis Girod and Bret Hull and Ryan Newton and Samuel Madden and Hari Balakrishnan. The pothole patrol: using a mobile sensor network for road surface monitoring. 2008. in Proc. ACM MobiSys, pages 29--39.
A system deployed on 7 taxis in the Boston, USA area to record the locations of potholes. The authors describe the algorithms used to filter the accelerometer data obtained, and decide whether the events are due to potholes, expansion joints, railroad crossings, or other road features. Clustering is used to obtain those locations where there are multiple reports of a pothole being present. Machine learning is used with labelled training data to construct the recogniser. For those detections with 4 or more sightings, 90 p.c. were true potholes.
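
The "cluster, then threshold on sightings" step might look like the following greedy sketch (the paper's actual clustering and parameters are not reproduced here; coordinates, radius, and threshold are invented):

    # Greedy clustering of pothole detections: a detection within `radius`
    # metres of an existing cluster centre joins it; clusters with at least
    # `min_sightings` members are reported as likely potholes.
    def cluster_detections(detections, radius=20.0, min_sightings=4):
        clusters = []   # each cluster is [centre, members]
        for p in detections:
            for c in clusters:
                centre, members = c
                if ((p[0] - centre[0]) ** 2 + (p[1] - centre[1]) ** 2) ** 0.5 <= radius:
                    members.append(p)
                    # Recompute the centre as the mean of the members.
                    c[0] = (sum(m[0] for m in members) / len(members),
                            sum(m[1] for m in members) / len(members))
                    break
            else:
                clusters.append([p, [p]])
        return [c for c in clusters if len(c[1]) >= min_sightings]

    # Hypothetical detections in metres: five near the origin, one stray.
    hits = [(0, 1), (2, 0), (1, 1), (0, 0), (1, 2), (500, 500)]
    print(cluster_detections(hits))   # only the five-sighting cluster survives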

Jonathan J. Davies and David N. Cottingham and Brian D. Jones. A Sensor Platform for Sentient Transportation Research. 2006. in Proc. 1st European Conference on Smart Sensing and Context, pages 226--229.
This paper describes experiences and lessons learnt in the creation of a vehicle-based sensor platform, as part of research into sentient computing. We outline the requirements of such a platform; the sensors that have been deployed in it; the on-board infrastructure to facilitate logging and the deployment of context-aware applications; and the external communications provision. We focus particularly on the lessons learnt in the deployment, justifying and evaluating our approach. This analysis will be of use to others considering similar sensor platform deployments. We conclude with an outline of our ongoing research.

David N. Cottingham and Jonathan J. Davies and Brian D. Jones. A Research Platform for Sentient Transport. 2006. IEEE Pervasive Computing, 5(4):63--64.
Describes the sentient vehicles project at the Computer Laboratory.

Jonathan J. Davies and Alastair R. Beresford and Andy Hopper. Scalable, Distributed, Real-Time Map Generation. 2006. IEEE Pervasive Computing, 5(4):47--54.

Marios D. Dikaiakos and Saif Iqbal and Tamer Nadeem and Liviu Iftode. VITP: An Information Transfer Protocol for Vehicular Computing. 2005. in Proc. ACM VANET.
Describes a protocol to run on top of vehicular ad-hoc networks, where groups of cars form Virtual Ad-Hoc Servers (VAHS). These are then used to provide results for particular queries, issued in the VITP format. Queries have return conditions, specifying when the query has been completed. Vehicles deliver query packets to each other by means of wireless networking. The authors provide a simulation study based on ns-2 and a simple traffic model, in order to evaluate their protocol, examining the effects of distance and vehicle density.

John Burgess and Brian Gallagher and David Jensen and Brian Neil Levine. MaxProp: Routing for Vehicle-Based Disruption-Tolerant Networks. 2006. in Proc. IEEE INFOCOM.
Describes the DTN implemented over the University of Massachusetts shuttle bus network (30 buses). Each bus has an onboard computer with a WiFi card. Data destined for other buses and the bus garage is passed over the DTN. Various optimisations to the DTN routing are described, including maintaining a probabilistic list of which buses a particular bus comes into contact with most frequently. These lists are exchanged between buses, so that a topology map with link costs can be built up. Also, receipt notifications are flooded through the network to flush copies of the data. Messages are also prioritised according to number of hops already completed and delivery likelihood (how probable a route to the destination is). The authors have also simulated how the protocol performs under different scenarios (e.g. changing the radio range) using real traces. On average the period of connection is of the order of several seconds.
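
The meeting-probability bookkeeping can be sketched as follows (a loose simplification of MaxProp's incremental scheme; bus names and the cost definition are for illustration only):

    class ContactTable:
        """Per-bus table of probabilities of meeting each other bus."""
        def __init__(self, peers):
            # Start with an equal likelihood of meeting every peer.
            self.prob = {p: 1.0 / len(peers) for p in peers}

        def met(self, peer):
            # Bump the peer just encountered, then renormalise to sum to 1.
            self.prob[peer] += 1.0
            total = sum(self.prob.values())
            for p in self.prob:
                self.prob[p] /= total

    def path_cost(tables, path):
        # Cost of a route = sum over hops of (1 - P(those two buses meet)).
        return sum(1.0 - tables[a].prob[b] for a, b in zip(path, path[1:]))

    buses = ["A", "B", "C"]
    tables = {b: ContactTable([p for p in buses if p != b]) for b in buses}
    tables["A"].met("B"); tables["A"].met("B"); tables["B"].met("C")
    print(path_cost(tables, ["A", "B", "C"]))   # low cost: A meets B often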

Thirunavukkarasu Sivaharan and Gordon Blair and Adrian Friday and Maomao Wu and Hector Duran-Limon and Paul Okanda and Carl-Fredrik Sørensen. Cooperating Sentient Vehicles for Next Generation Automobiles. 2004. in Proc. ACM WAMES.
Describes the construction of a testbed for sentient vehicles, where these are model cars equipped with GPS, ultrasonic position sensors, electronic compass, 802.11b WiFi and a Pocket PC running Windows CE. The main focus is on the development of a middleware for MANETs, involving sensor fusion and intervehicular communication. No results are presented.

Peter Day and Jianping Wu and Neil Poulton. Beyond Real Time. 2006. ITS International, 12(6):55--56.
Overview of the uses of floating car data. This is collected either from specially equipped probe vehicles with GPS receivers, or using coarser location data from the cellular network. Using taxis can sometimes give a distorted view of congestion, as they may have dedicated lanes. Examples of European FCD projects are given.

N. Karlsson. Floating Car Data Deployment & Traffic Advisory Services. 2003. in Bridging the European ITS Business Cooperation with China.
Describes a 220 vehicle deployment of sensors to measure traffic speeds in the city of Gothenburg. Readings are uploaded over GSM GPRS. Out of a total fleet of 500,000 cars, the deployment provided good quality data concerning congestion: as good as that from existing camera/loop-based systems.

A. Demers and G. F. List and A. Wallace and E. E. Lee and J. M. Wojtowicz. Probes As Path Seekers: A New Paradigm. 2006. Journal of the Transportation Research Board, 1944:107--114.
Details a project where 200 vehicles were equipped with a pocket PC with 3G cellular connectivity and a GPS receiver. Each vehicle acts as a traffic speed probe, reporting this to a central server every 30 seconds. These readings are aggregated and passed back to all users. The routing software on the PDA then re-calculates the route accordingly, based on the traffic information.

Kai-Uwe Thiessenhusen. Implications on City Traffic from Floating Car Data. 2003. in Proc. DAAD Summer School Traffic and Econophysics.
Describes DLR's (Germany) programme to use FCD from the taxi fleet. Compares this approach to loop detectors and airborne/satellite sensing. Points out that FCD has good coverage of motorways (100 p.c.) and principal roads (95 p.c.). Also notes that FCD can be used to generate digital maps. FCD also provides useful data concerning both intersections and linear portions of roads, which can show the effects of particular intersections (the most likely places for congestion to occur). A disadvantage of FCD is that it gives no real information on flow or density without deriving it from the data (with some assumptions).

George Bilchev. Traffimatics. 2006. British Telecommunications plc.
Overview and final report of the Traffimatics project. BT developed an in-car floating car data platform, and tested WiFi communication at Adastral Park (UK). A traffic microsimulator was also developed to test communication in various traffic scenarios. The work was then used by Nottingham Trent University in conjunction with a roadside WiFi deployment in the city. NTU developed a visualiser for the floating car data.

Murat Caliskan and Andreas Barthels and Björn Scheuermann and Martin Mauve. Predicting Parking Lot Occupancy in Vehicular Ad Hoc Networks. 2007. in Proc. IEEE VTC, pages 277--281.
Uses a Markov model in order for cars to estimate the occupancy of parking lots around a city, by receiving time-stamped information through a VANET distribution mechanism and predicting from it. The authors provide a mathematical model, and proceed to simulate the situation using a realistic road network and car park distribution. The wireless aspects of the simulation are, however, not comprehensively specified. The simulations coupled VISSIM and ns-2, and also utilised Matlab libraries. The predictions appear to work well for 15 minutes into the future, which is deemed sufficient for most cities, where one will typically reach a car park within that time.

Sandor Dornbush and Anupam Joshi. StreetSmart Traffic: Discovering and Disseminating Automobile Congestion Using VANET's. 2007. in Proc. IEEE VTC, pages 11--15.
Uses probe vehicles to estimate traffic congestion. The novel features of this system are the decentralised (rather than centralised) processing of the data by all vehicles equipped with the system, and the communication of messages only when the speed of the vehicle is above or below the expected speed, i.e. if congestion is expected from the already obtained data there is no need to notify other vehicles. Congestion is inferred using k-means clustering. The protocol is evaluated by simulation.
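
The clustering involved is ordinary k-means; a toy 1-D version over observed speeds gives the flavour (generic k-means, not the authors' implementation; the speeds are invented):

    def kmeans_1d(values, k=2, iters=20):
        # Initialise centroids spread across the data range.
        lo, hi = min(values), max(values)
        centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
        for _ in range(iters):
            # Assign each value to its nearest centroid...
            groups = [[] for _ in range(k)]
            for v in values:
                nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
                groups[nearest].append(v)
            # ...then move each centroid to the mean of its group.
            centroids = [sum(g) / len(g) if g else centroids[i]
                         for i, g in enumerate(groups)]
        return centroids

    speeds = [8, 10, 9, 55, 60, 58, 12, 62]   # km/h: two traffic regimes
    print(kmeans_1d(speeds))   # -> roughly [9.75, 58.75]: congested vs free-flow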

Lars Wischhof and André Ebner and Hermann Rohling and Matthias Lott and Rüdiger Halfmann. SOTIS -- A Self-Organizing Traffic Information System. 2003. in Proc. IEEE VTC, pages 2442--2446.
Describes how vehicles can be used as traffic congestion probes and distribute this information between each other. This involves using the UTRA TDD Ad Hoc wireless technology proposed by the Fleetnet project. Each vehicle periodically (once per second) broadcasts its position and a traffic report from its knowledge base. It receives packets from other vehicles and incorporates this information into the KB. The protocol is evaluated by simulation of a two lane, straight line highway. It is found that local traffic information is received 50 km away within 27 minutes. This is evidently dependent on market penetration, which is assumed to be low. The authors conclude that such a system is attractive because it requires no infrastructure along the roads. However, it should be noted that with low penetration rates the delay before local information arrives some distance away appears approximately equal to that of current traffic information systems.

L. Wischhof and A. Ebner and H. Rohling. Information dissemination in self-organizing intervehicle networks. 2005. IEEE Transactions on Intelligent Transportation Systems, 6(1):90--101.
Describes a system (SODAD) for aggregating and distributing data collected by vehicles, such as mean traffic speed along roads, over a vehicular ad hoc network. Roads are divided into fixed-length segments, and an aggregation function is applied to all data in each segment. The tuple corresponding to each segment is then distributed. The system is evaluated by simulation, and a real prototype is described.

Victor Gradinescu and Cristian Gorgorin and Liviu Iftode. Adaptive Traffic Lights Using Car-to-Car Communication. 2007. in Proc. IEEE VTC, pages 21--25.
Instead of SCOOT detectors, traffic lights use the identifiers broadcast over the wireless network by all vehicles that are approaching. This allows more accurate estimation of queue lengths, and hence better optimised light cycles. Simulation of various intersections is used to illustrate reduced waiting times and pollutant emissions.

Gabriel Leen and Donal Heffernan. Expanding Automotive Electronic Systems. 2002. IEEE Computer, 35(1):88--93.
A broad overview of in-car computer technologies, with a particular emphasis on X-by-wire. Comprises a historical review and predictions of future trends. Mentions things like FlexRay, TTCAN, ByteFlight, etc.

Uichin Lee and Biao Zhou and Mario Gerla and Eugenio Magistretti and Paolo Bellavista and Antonio Corradi. Mobeyes: smart mobs for urban monitoring with a vehicular sensor network. 2006. IEEE Wireless Communications, 13(5):52--57.
Motivates the idea of using vehicles as a sensor network. Data is collected and stored on the vehicles themselves, rather than being uploaded to a central server. Data are diffused between nodes, with (normally) only the source transmitting the updates to other nodes that it meets. To obtain summaries of the data, nodes (police agents in this paper's example) emit requests. Bloom filters are used to specify which messages have already been seen by the requestor (essentially, these hash the message IDs to ascertain which messages have already been seen). Neighbours ascertain which packets the requestor is missing, according to the Bloom filter, and then return packets they are aware of. As the requestor moves through a city, it will obtain missing packets from different nodes that it meets. Performance of the system is evaluated by simulation in ns-2.
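
A small Bloom filter over message IDs illustrates the summary exchange (a generic sketch rather than MobEyes code; the sizes, hash construction, and IDs are arbitrary):

    import hashlib

    class BloomFilter:
        def __init__(self, bits=256, hashes=3):
            self.bits, self.hashes = bits, hashes
            self.array = 0   # the bit vector, held in a Python int

        def _positions(self, item):
            # Derive `hashes` bit positions from salted SHA-1 digests.
            for i in range(self.hashes):
                digest = hashlib.sha1(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest, "big") % self.bits

        def add(self, item):
            for pos in self._positions(item):
                self.array |= 1 << pos

        def __contains__(self, item):
            # False positives are possible; false negatives are not.
            return all(self.array & (1 << pos) for pos in self._positions(item))

    seen = BloomFilter()                  # the requestor's summary...
    for msg_id in ["ev-17", "ev-23"]:
        seen.add(msg_id)
    local = ["ev-17", "ev-23", "ev-31"]   # ...a neighbour checks its packets
    print([m for m in local if m not in seen])   # -> ['ev-31'], modulo FPs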

Uichin Lee and Eugenio Magistretti and Biao Zhou and Mario Gerla and Paolo Bellavista and Antonio Corradi. Efficient Data Harvesting in Mobile Sensor Platforms. 2006. in Proc. IEEE PerSeNS.
Briefly overviews the idea of vehicular sensor networks, then proceeds to describe how infostations can be used to help data harvesting (if data is stored on vehicles). This approach is justified because the quantities of data generated by vehicles are potentially huge, and hence not all of it should be uploaded. Infostations maintain indexes of those vehicles that have data about particular events. When a query is submitted that mentions that event, a list of peers is returned. When infostations are not available, a mechanism termed Mobility-Assist Data Harvesting is proposed, where vehicles broadcast summaries of what data they have collected when they encounter other vehicles. In this way, the knowledge of where sensor data resides is diffused throughout the network.

Uichin Lee and Eugenio Magistretti and Mario Gerla and Paolo Bellavista and Antonio Corradi. Dissemination and Harvesting of Urban Data using Vehicular Sensor Platforms. 2008. Submitted to IEEE Transactions on Vehicular Technology.
In-depth explanation of MobEyes and the distribution protocol it uses. In particular, the delay in harvesting data summaries is analysed. The protocol is tested by ns-2 simulation. The security of the protocol is also considered.

Tamer Nadeem and Sasan Dashtinezhad and Chunyuan Liao and Liviu Iftode. TrafficView: Traffic Data Dissemination using Car-to-Car Communication. 2004. ACM Mobile Computing and Communications Review, 8(3):6--19.
Part of the e-Roads project. The concept is to have information from large numbers of cars aggregated by means of a wireless ad hoc network, and thus used to provide traffic information. This paper mainly deals with different methods of aggregating the data to ensure that the amount that must be transmitted is limited. It also considers how messages should age over time. The results are obtained by simulation, but using real GPS traces as input. The equipment deployed in the vehicle consists of a PDA connected to an OBD and a GPS receiver, along with an IEEE 802.11 card.
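
One way such aggregation and ageing could work is sketched below (this is not TrafficView's actual algorithm: here records for nearby vehicles are merged into one averaged record, and records beyond an age limit are dropped; all parameters are invented):

    def aggregate(records, merge_dist=100.0, max_age=60.0, now=0.0):
        """records: dicts with 'pos' (metres along road), 'speed', 'ts'."""
        fresh = [r for r in records if now - r["ts"] <= max_age]   # age out
        fresh.sort(key=lambda r: r["pos"])
        merged = []
        for r in fresh:
            if merged and r["pos"] - merged[-1]["pos"] <= merge_dist:
                m = merged[-1]
                n = m["count"]
                # Weighted merge keeps one record per stretch of road,
                # bounding what must be transmitted.
                m["pos"] = (m["pos"] * n + r["pos"]) / (n + 1)
                m["speed"] = (m["speed"] * n + r["speed"]) / (n + 1)
                m["ts"] = max(m["ts"], r["ts"])
                m["count"] = n + 1
            else:
                merged.append(dict(r, count=1))
        return merged

    cars = [{"pos": 10, "speed": 30, "ts": 100},
            {"pos": 60, "speed": 28, "ts": 110},
            {"pos": 900, "speed": 50, "ts": 115},
            {"pos": 500, "speed": 40, "ts": 20}]   # stale record
    print(aggregate(cars, now=120))   # two merged records; the stale one is gone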

ISO. Road vehicles -- Controller area network (CAN) -- Part 1: Data link layer and physical signalling. 2003. 11898-1:2003, International Organization for Standardization.

Washington Y. Ochieng and John W. Polak and Robert B. Noland and Lin Zhao and David Briggs and John Gulliver and Andrew Crookell and Ruthven Evans and Matt Walker and Walter Randolph. The Development and Demonstration of a Real Time Vehicle Performance and Emissions Monitoring System. 2002. Submitted to the 82nd TRB Annual Meeting.
High-level description of instrumenting a vehicle to measure emissions, correlated with position (derived from dead reckoning and GPS measurements) and engine performance (such as non-smooth braking or acceleration). Emissions sensors are distributed throughout the vehicle (inside and out). No results are provided.

Eiman Kanjo and Peter Landshoff. Mobile Phones to Monitor Pollution. 2007. IEEE Distributed Systems Online, vol. 8.
Briefly overviews the MESSAGE project, which uses mobile phones with Bluetooth GPS units and pollution sensors to record pollution levels around Cambridge, UK. Also records noise levels. Phones are carried on bicycles by delivery persons.

S. B. Eisenman and E. Miluzzo and N. D. Lane and R. A. Peterson and G-S. Ahn and A. T. Campbell. The BikeNet mobile sensing system for cyclist experience mapping. 2007. in ACM SenSys, pages 87--101.
Describes how a bicycle was fitted out with a large number of sensors, including tilt, GPS position, speed, cyclist's heart rate and galvanic skin response, and pollutant and allergen sensors. Sensors communicate with a Nokia N80 handset over Bluetooth. Bikes act as data mules, communicating with WiFi access points around the campus in a delay tolerant networking arrangement. The authors have developed a web portal for visualising the data. Real-time querying is also supported.

J.K. Hedrick and M. Tomizuka and P. Varaiya. Control issues in automated highway systems. 1994. IEEE Control Systems Magazine, 14(6):21--32.
Describes the problems associated with platooning vehicles, and describes a system for solving the issues associated with both longitudinal and lateral control of a platoon. In a separate article the authors estimate that highway throughput can be increased from 2000 vehicles/hour to 6000 if platooning is used. The PATH project is particularly concerned with the communication systems that must ensure that platoon join and split manoeuvres are conducted safely. A token bus architecture is suggested for when WaveLAN is used, in order to give each vehicle in the platoon a guaranteed slot to transmit its current position, velocity, and acceleration to other vehicles.

T. Shinohara and M. Okada and K. Yoshimoto. Following performance evaluation of a mechanically coupled platoon. 2001. in Proc. IEEE Vehicle Electronics Conference, pages 91--96.
The CHAUFFEUR project sought to achieve truck platooning by means of having mechanical tow bars between the vehicles: the head of the platoon would be driven as normal, with other vehicles using sensors on their tow bars to ascertain what manoeuvres to carry out. CHAUFFEUR II was a later project that further enhanced the systems from CHAUFFEUR I.

J. Burke and D. Estrin and M. Hansen and A. Parker and N. Ramanathan and S. Reddy and M. B. Srivastava. Participatory Sensing. 2006. in Proc. ACM WSW.
Introduces the concept of participatory sensing, where people's everyday devices, in particular mobile phones, are used to sense their environment, and the data uploaded to a central repository. Such projects stem from the idea that residents of a local community have a much better knowledge of patterns and changes in their local area than external specialists in particular subject areas. By using everyday devices, grass roots data collection can be encouraged, which can then have political consequences. Network attestation of location and time will be required, as will methods for users to preserve their privacy.

Shane B. Eisenman and Nicholas D. Lane and Emiliano Miluzzo and Ronald A. Peterson and Gahng-Seop Ahn and Andrew T. Campbell. MetroSense Project: People-Centric Sensing at Scale. 2006. in Proc. ACM WSW.
Brief overview of a project to make sensing human-centric, i.e. to use devices carried by humans in unconstrained mobility patterns. In addition, the devices carried and the wireless networks they use will be very diverse. A tiered architecture is proposed, where there is a centralised server tier, then service access points where sensor data can be uploaded, and sensors re-programmed as necessary. The final tier includes the sensors themselves. The concept of opportunistic sensor networking is introduced.

Hillol Kargupta and Ruchita Bhargava and Kun Liu and Michael Powers and Kakali Sarkar and Martin Klein and Mitesh Vass and David Handy. VEDAS: A Mobile and Distributed Data Stream Mining System for Real-Time Vehicle Monitoring. 2004. in Proc. 4th SIAM International Conference on Data Mining, pages 300--311.
Describes how a monitoring system (GPS receiver, OBD-II, iPAQ) was deployed in a vehicle to monitor performance. The data is processed onboard and then uploaded over the cellular network. The authors focus on the algorithms for this onboard analysis, including PCA and clustering. They find that the reduction in size of the data to be transmitted after this processing is considerable. They also attempt to limit their device's power consumption as it runs from its own battery.

Prashanth Mohan and Venkat Padmanabhan and Ram Ramjee. TrafficSense: Rich Monitoring of Road and Traffic Conditions using Mobile Smartphones. 2008. MSR-TR-2008-59, Microsoft Research India.
Sensing traffic conditions in the developing world is not as simple as in the developed world. Many of the vehicles are not cars or trucks, but rickshaws or bicycles. Satellite navigation units are (as yet) far less common. Mobile phones are, however, far more prevalent. TrafficSense seeks to use the microphone, camera, and Bluetooth capabilities of these phones, in conjunction with a Bluetooth accelerometer, to determine traffic congestion and pothole locations. A key part of the work is coping with the accelerometer not having a fixed orientation, as it is carried as normal by users. In addition, the authors consider how to reduce data transfer (and increase privacy) by processing the data on the phone before it is uploaded.
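
The orientation problem reduces to projecting each sample onto an estimated gravity direction, giving a virtual vertical axis (an assumption-laden sketch, not the paper's code; all readings are invented):

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def vertical_component(samples, calibration):
        # Estimate the gravity direction as the mean accelerometer vector
        # over a window in which the vehicle is assumed to be moving smoothly.
        g = [sum(s[i] for s in calibration) / len(calibration) for i in range(3)]
        g_norm = dot(g, g) ** 0.5
        g_unit = [x / g_norm for x in g]
        # Project each sample onto gravity: a virtual z-axis reading that is
        # independent of how the phone happens to be oriented.
        return [dot(s, g_unit) for s in samples]

    # Hypothetical readings (m/s^2) from a phone lying at an odd angle.
    calib = [(5.5, 5.6, 5.7), (5.7, 5.6, 5.6), (5.6, 5.7, 5.6)]
    ride = [(5.6, 5.6, 5.6), (9.0, 9.1, 9.2), (5.7, 5.5, 5.6)]  # middle: a jolt
    print(vertical_component(ride, calib))   # the jolt shows on the virtual axis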

Richard Bishop. Intelligent Vehicle Technology and Trends. 2005. Artech House, 685 Canton Street, Norwood, MA, USA.
Overview of ITS.


(Inter-)Vehicular Networks

G. Liu and B. Lee and B. Seet and C. Foh and K. Wong and K. Lee. A Routing Strategy for Metropolis Vehicular Communications. 2004. in Proc. International Conference on Information Networking, pages 533--542.
Introduces the A-STAR routing policy for vehicles in a (grid layout) city. Uses buses as the backbone, as these are regular and relatively slow compared to cars. Roads can be classified as links with quality dependent on the number of buses and traffic conditions, and the metric for each road can be varied dynamically based on this information. Results in less packet loss but longer delay, as A-STAR may give more hops than other algorithms such as Greedy Perimeter Stateless Routing.

J. Blum and A. Eskandarian and L. Hoffman. Challenges of Intervehicle Ad Hoc Networks. 2004. IEEE Transactions on Intelligent Transportation Systems, 5(4):347--351.
Overview of the differences between intervehicular networks and the standard ad hoc networks used in simulations. These are the constrained movement of nodes (and therefore topology), the high rate of topology change due to high relative speeds (implying more frequent fragmentation), and reduced network redundancy. The authors note that without high levels of penetration, large transmission ranges are required. They also point out that dynamic source-based routing is not possible due to the route no longer existing by the time discovery has been completed.

U. Varshney. Vehicular Mobile Commerce. 2004. IEEE Computer, pages 116--118.
Short article on how increasingly wireless communications are being integrated into vehicles. This presents opportunities for location based services, intelligent transportation systems, vehicular diagnostics, etc. It also mentions Georgia Tech's traffic spies project, with 500 vehicles equipped with onboard units, reporting trip data back to a central computer.

H. Wu and J. Lee and M. Hunter and R. M. Fujimoto and R. L. Guensler and J. Ko. Simulated Vehicle-to-Vehicle Message Propagation Efficiency on Atlanta's I-75 Corridor. 2004. Georgia Institute of Technology.
A preliminary study based on simulating vehicle to vehicle communication over realistic traffic patterns and a real life topology. They conclude that the degree of connectivity and the latency for messages to be passed through the network depends on the length of the end to end path, the density of the traffic, and the penetration of the technology. They provide a figure of 15-20 p.c. penetration required in dense traffic conditions for a message to travel 6 miles with a 2 second latency. With sparse traffic (night time), even with 100 p.c. penetration, the latency is much higher, due to end to end paths not being available. They note that this could be mitigated using cellular networks as a backup, or with extra roadside nodes.

P. Bergamo and D. Maniezzo and K. Yao and M. Cesana and G. Pau and M. Gerla and D. Whiteman. IEEE 802.11 wireless network under aggressive mobility scenarios. 2003. in Proc. International Telemetry Conference.
Results of testing IEEE 802.11b between cars. When moving at high speeds in convoy mode, or in opposite directions, communication is achievable (even at a relative speed of 240 km/h). However, with semi-free space propagation (i.e. with some objects occasionally obstructing the line of sight between antennae), performance drops significantly, as packet loss occurs during the brief occlusion periods, coupled with an increase in jitter. The tests disabled the RTS/CTS protocol, as there was only one sender/receiver pair. With this enabled it remains to be seen whether connectivity can be achieved for a useful period of time at high relative speeds.

H. Wu and R. Fujimoto and R. Guensler and M. Hunter. MDDV: a mobility-centric data dissemination algorithm for vehicular networks. 2004. in Proc. ACM VANET, pages 47--56.
Describes how data would be communicated throughout a vehicular ad hoc network. MDDV provides for unicast, multicast, scan (where a message must pass through all hosts in a particular area), and anycast (any one host of a particular type in a particular area). It employs trajectory-based routing, where a message has geographical destination co-ordinates. A key feature of the protocol is that it deals with mobility by allowing the destination co-ordinates to be updated as the destination host(s) move. Hence the network uses a store and forward scheme that is able to use a mechanism similar to gossip paradigms, whilst ensuring that messages can be discarded when they become too old to be of use. The simulation assumes a wireless range of 250 m at 1 Mbps.

H. Wu and R. Fujimoto and G. Riley. Analytical Models for Information Propagation in Vehicle-to-Vehicle Networks. 2004. in Proc. IEEE VTC.
Describes a model of ad hoc vehicle to vehicle network simulated to attempt to ascertain the requirements for network technologies to be used. The authors conclude that a wireless network of the 802.11b type could be used, and that a useful coverage for a message to travel over would be a radius of 6 km. They also point out that propagation takes place between isolated clusters of vehicles, and therefore that the signal will hop quickly along the cluster, and then the head of the cluster will play catch-up with the slower tail vehicles of the cluster in front. The study makes the assumption that vehicles' speeds are independent, which the authors admit is not realistic. Consequently, when comparing their predictions to traffic simulations, they comment that the accuracy is not particularly good, mainly due to not taking into account vehicles' speeds being interdependent (traffic jams, platooning, etc.).

S. Goel and T. Imielinski and K. Ozbay. Ascertaining viability of WiFi based vehicle-to-vehicle network for traffic information dissemination. 2004. in Proc. IEEE ITS, pages 1086--1091.
Discusses whether a vehicle to vehicle ad hoc network is a viable mechanism for information dissemination over a city. The authors conclude that such a network is possible, but that it requires WiFi coverage to have a radius of 100 m and 3 to 10 p.c. penetration. They do not consider what routing algorithm is used, so as not to affect their results. They do not explicitly state what the density of traffic in the simulation is, nor how many simulations were performed (and their standard deviation). One question not covered is what happens to the proposed scheme when there is more than one incident causing congestion; the algorithm for information propagation causes vehicles to only propagate the information concerning the most congested road, even if there is more than one value that indicates severe congestion. Finally, it is not clear whether the system would work with 100 p.c. penetration (would there be oscillation over routes, etc.?). Many of these questions are likely to have not been answered due to space constraints in the original paper.

B. Hughes and R. Meier and R. Cunningham and V. Cahill. Towards Real-Time Middleware for Vehicular Ad Hoc Networks. 2004. in Proc. 1st ACM Workshop on Vehicular Ad Hoc Networks.
Describes the need for middleware to be space-aware. If hard real time guarantees are to be obtained for intervehicular ad hoc networks, applications must be aware of the space constraints they require to ensure such guarantees are met. Conversely, the middleware should be able to infer from the proximity of other nodes the guarantees that can be made. This idea is termed the space-elastic model. This paper appears to be a contraction of a more detailed description.

M. Bechler and W. J. Franz and L. Wolf. Mobile Internet Access in FleetNet. 2003. in 13. Fachtagung Kommunikation in Verteilten Systemen (KiVS).
Describes the usage of an in-car Mobile IPv4 proxy to interface with the Fleetnet Mobile IPv6 cloud (a lightweight version of the MIPv6 standard). Vehicles form an ad hoc network, and access the Internet via the ad hoc network to internet gateways along the sides of the road. An optimised version of TCP is used (similar to that provided in the WAP 2.0 specification). A proxy within the Internet allows communication between the Fleetnet cloud and the remainder of the Internet, and deals with the switching and signalling required as vehicles change their internet gateway point. Note that the adjusted version of MIPv6 pushes the Foreign Agent role back onto the Internet proxy, to avoid the mobile node being the MIP tunnel end point. The reason for this is that nodes in Fleetnet have constant addresses, rather than obtaining a new Care Of Address from each Internet gateway. Hence forwarding from old CoAs to new ones by the FA is not appropriate.

E. Huang and W. Hu and J. Crowcroft and I. Wassell. Towards Commercial Mobile Ad Hoc Network Applications: A Radio Dispatch System. 2005. in Proc. ACM MobiHoc, pages 355--365.
Describes the simulation of an ad hoc network for taxi dispatch. Good results, but simulation is of a 5 km by 5 km Manhattan Grid. Radio propagation is assumed to be a maximum of 474 m in a straight line, and 150 m if round a corner. Outages tend to be between 0 and 4 seconds as nodes stray out of ad hoc coverage. Only about 40 p.c. of nodes are connected to the dispatcher at any one time. This network is therefore only suitable for small delay tolerant applications. The authors justify this approach financially in comparing it to commercial dispatch radio systems. The simulation varies the number of taxis with the system, and adds in some congestion (e.g. 300 taxis, 29,700 other cars) to show that the performance is not significantly affected.

H. Wu. Analysis and Design of Vehicular Networks. 2004. Georgia Institute of Technology.
Summarises the work that the group at Georgia Tech. have carried out on the design and feasibility of vehicular ad hoc networks. It mentions simulations carried out with realistic traffic flows, and also briefly contrasts the usage of wireless LAN or cellular technologies.

David N. Cottingham. Possible Directions for Intervehicular Networking Research. 2004. Unpublished. (Internal Digital Technology Group Report)

Tao Ye and Hans-Arno Jacobsen and Randy H. Katz. Mobile Awareness in a Wide Area Wireless Network of Info-Stations. 1998. in Proc. MobiCom, pages 109--120. ACM.
Describes the map-on-the-move application, where the speed and destination of a vehicle are used to intelligently prefetch different levels of map detail to aid the driver. Depending on the speed profile (local, highway or combined), a greater or lesser number of map segments (with lesser or greater detail respectively) are prefetched. Cache size appears to have no effect, as the usage of map segments is essentially FIFO (with the occasional doubling back). Intelligent prefetch allows less bandwidth to be used than naive prefetch (which attempts to simply prefetch as much as possible). The authors test the application using an ns-2 simulation, and find that a network of fixed, high-bandwidth wireless infostations with short range but high density gives a lower user response time than a sparse but longer range network.

H. Wu and M. Palekar and R. Fujimoto and R. Guensler and M. Hunter and J. Lee and J. Ko. An Empirical Study of Short Range Communications for Vehicles. 2005. in Proc. ACM VANET, pages 83--84.
Short description of V2V and V2R (Vehicle to Roadside) communication experiments on a highway in Atlanta. Curves and bridges were found to affect the packet reception success rate. The range of 802.11b was 150 m for a consistent connection, and up to 600 m for sporadic connectivity (for V2V). A test involving V2R via another vehicle to increase the range was also performed. This appears to mitigate some of the effects of non-LOS situations between the sender and the roadside receiver.

J. Tian and L. Han and K. Rothermel and C. Cseh. Spatially Aware Packet Routing for Mobile Ad Hoc Inter-Vehicle Radio Networks. 2003. in Proc. IEEE ITSC, pages 1546--1552.
Describes Spatially Aware Routing (SAR) and compares it to the GPSR ad hoc routing protocol. GPSR uses greedy forwarding to send packets to their destination, which may mean, in the case of a road fork, that packets travel a significant distance and then cannot be further forwarded (a topology "black hole"). SAR instead assumes the existence of a location service that allows the locations of all vehicles to be known (N.B. this is a highly complex problem). It also assumes that all vehicles are aware of the road topology, translated into a weighted graph representation. Using this graph, a packet to be sent has a list of geographical vertices (waypoints) placed as a route in its header. Vehicles then forward to the next hop along that road topology route. If a next hop is not available, the packet may be buffered, or in future implementations could use greedy routing. The protocol is compared to GPSR by simulation, using 100 vehicles on a relatively simple topology. Vehicles have random speeds and random destinations. Also, the paper equates radio range to vehicle density, giving results such as: for greater ranges the number of hops per packet first increases (due to more packets being delivered with "increased density"), and then decreases (due to greater ranges implying fewer hops needed to cover a certain distance?). Packet delivery ratios are higher than for GPSR, at the expense of greater delay if buffering is used.
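
Computing the waypoint list for the packet header over the weighted road graph is plain shortest-path search; a sketch follows (junction names, weights, and the graph encoding are invented; SAR's location service and per-hop forwarding are the hard parts and are not shown):

    import heapq

    def waypoint_route(graph, src, dst):
        """Dijkstra over a weighted road graph, {vertex: [(neighbour, weight)]};
        returns the vertex list placed in the packet header as waypoints."""
        dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue   # stale queue entry
            for v, w in graph.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v], prev[v] = d + w, u
                    heapq.heappush(heap, (d + w, v))
        path, node = [dst], dst
        while node != src:
            node = prev[node]        # KeyError if dst is unreachable
            path.append(node)
        return list(reversed(path))

    # Toy road topology: junctions with (neighbour, road length in metres).
    roads = {"A": [("B", 100), ("C", 250)], "B": [("D", 100)],
             "C": [("D", 50)], "D": []}
    print(waypoint_route(roads, "A", "D"))   # -> ['A', 'B', 'D']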

Hong Bong Kim and Marc Emmelmann and Berthold Rathke and Adam Wolisz. A Radio over Fiber Network Architecture for Road Vehicle Communication Systems. 2005. in Proc. IEEE VTC.
Describes a MAC protocol for mm wave radio from vehicles to roadside. The frequencies used would be in the 36 or 60 GHz bands, with very short propagation ranges. Base stations would therefore be needed every 100 m or so. These would be simple radio to optical converters, which lead back to a central control station. The base stations attached to one control station would form a virtual cell zone, where handovers at the MAC level can be co-ordinated. The authors note that inter-control station handover is more difficult, as it takes place at the IP level. This type of technology is expected to support fast handovers at bandwidths of 2-10 Mbps per mobile host.

Kohshi Abe and Teruo Tobana and Takayuki Sasamori and Hisao Koizumi. A Study on a Road-Vehicle Communication System for the Future Intelligent Transport Systems. 2000. in Proc. 7th IEEE International Conference on Parallel and Distributed Systems: Workshops.
Evaluates the radio propagation characteristics of mm wave roadside communications. The authors give a table comparing propagation values at various frequencies (2, 5.8, 24, 30 GHz), and also examine the fading effect of rain. They conclude that this technology is feasible from a radio channel point of view.

W.-S. Soh and H. Kim. Dynamic Bandwidth Reservation in Cellular Networks Using Road Topology Based Mobility Predictions. 2004. in Proc. IEEE INFOCOM.
Describes how handoffs can be predicted on the basis of previous vehicle behaviour. A database is maintained of the locations at which handoffs have been performed, and of the current and immediately previous road segments that the vehicle was on. This enables a probabilistic estimate to be made of which base station a connection will hand off to, and at what time, without any assumptions about the cell boundary. This estimation can be further (slightly) improved by constraining the paths that can be followed by the vehicle using a road topology map. Base stations are then able to reserve appropriate amounts of bandwidth for connections that are about to be handed off. Base stations must frequently (at 5 second intervals) recalculate the handoff probabilities of all nodes, based on their positions, as well as communicate with other base stations to update them with the predictions. Simulations show that such an approach does reduce the fraction of forced call terminations and the call rejection rate. It remains to be seen whether this is a scalable approach in reality.
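
The prediction database amounts to counts keyed on (previous segment, current segment) pairs; a sketch of the idea (segment and base station identifiers are invented; the paper also predicts handoff times, omitted here):

    from collections import Counter, defaultdict

    class HandoffPredictor:
        def __init__(self):
            # (previous segment, current segment) -> counts of next base station.
            self.history = defaultdict(Counter)

        def record(self, prev_seg, curr_seg, next_bs):
            # Log an observed handoff for a vehicle on this segment pair.
            self.history[(prev_seg, curr_seg)][next_bs] += 1

        def predict(self, prev_seg, curr_seg):
            # Probability distribution over candidate next base stations.
            counts = self.history[(prev_seg, curr_seg)]
            total = sum(counts.values())
            return {bs: n / total for bs, n in counts.items()}

    p = HandoffPredictor()
    p.record("seg4", "seg7", "BS2")
    p.record("seg4", "seg7", "BS2")
    p.record("seg4", "seg7", "BS5")
    print(p.predict("seg4", "seg7"))   # -> {'BS2': 0.667, 'BS5': 0.333} approx.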

A. Ebner and H. Rohling and L. Wischhof and R. Halfmann and M. Lott. Performance of UTRA TDD Ad-Hoc and IEEE 802.11b in Vehicular Environments. 2003. in Proc. IEEE VTC.
Compares, by simulation, the use of a modified version of UMTS Terrestrial Radio Access Time Division Duplex (UTRA TDD) and IEEE 802.11b for intervehicular communication. Concludes that the use of UTRA is far superior due to its stronger coding techniques. In particular, in an urban environment (i.e. with relative speeds up to 120 km/h), IEEE 802.11b fails to deliver performance of less than 1 p.c. packet loss.

J. Yin and T. ElBatt and G. Yeung and B. Ryu and S. Habermas and H. Krishnan and T. Talty. Performance Evaluation of Safety Applications over DSRC Vehicular Ad Hoc Networks. 2004. in Proc. ACM VANET, pages 1--9.
Describes a simulation of the DSRC protocol for use in collision avoidance mechanisms for vehicles. DSRC uses the 5.9 GHz frequency, and is very similar to the IEEE 802.11 specification. The MAC is CSMA/CA, as in 802.11a, with the parameters tweaked for high speed. Doppler effects at high speeds are hypothesised to be negligible. Delays are highly variable, depending on line of sight communication and range (closer together implies multipaths combine to form a single complex signal). The bit error rate is found, by simulation, to vary significantly with the relative speed of the transmitter and receiver. High vehicle speeds mean that the channel characteristic may change in the middle of a packet transmission, which increases the BER as training sequences only exist at the start of each packet. A simulation is carried out showing the feasibility of DSRC for safety critical applications that require a particular QoS of delay.

R. Gass and J. Scott and C. Diot. Measurements of in-motion 802.11 networking. 2006. in Proc. IEEE WMCSA.
Describes how WiFi performance varies at different speeds (0-75 mph), with varying packet sizes, and using TCP, UDP and web application traffic. The setup was a car driving down a straight road in the desert, past an access point. The authors conclude that access in such scenarios is possible, but there is much work to be done to further optimise it.

J"org Ott and Dirk Kutscher. Drive-thru Internet: IEEE 802.11b for ``Automobile'' Users. 2004. in Proc. IEEE INFOCOM.
Detailed experimental evaluation of IEEE 802.11b performance at speeds of up to 180 km/h on an uncongested autobahn. Examines TCP and UDP performance for both fixed and mobile senders. With a fixed sender that uses a wired ethernet connection to a wireless AP, the packet loss and effective throughput is lower than with a mobile sender, as the fixed sender does not get feedback from the wireless interface it is indirectly using, and therefore does not adjust its send rate accordingly. Throughputs are not significantly affected at different speeds, and were approximately 4.5 - 5 Mbps. APs had a range of approximately 200 m diameter, meaning that a total TCP session transfer of 6, 5, and 1.5 MBytes can be achieved at 80, 120, and 180 km/h respectively, using a mobile sender. With a fixed sender this reduces significantly (factor of 2 at 120 km/h). For a fixed sender, TCP performs better than UDP, as it appears that its congestion avoidance mechanism alleviates the AP overloading problem caused by the sender not having its own wireless interface. Authors first carried out measurements of their own equipment in a static scenario to obtain reference measurements.

Jatinder Pal Singh and Nicholas Bambos and Bhaskar Srinivasan and Detlef Clawin. Wireless LAN Performance Under Varied Stress Conditions in Vehicular Traffic Scenarios. 2002. in Proc. IEEE VTC.
Results from experimental evaluation of 802.11b for intervehicular communication in urban, semi-urban, and freeway environments. Semi-urban (40 mph) is found to be best. Packet size is varied, with small packet sizes being better at larger separations (greater loss rate). Packet size should therefore be changed depending on the distance the packet is to travel. Throughput decreased with absolute and relative speed. Throughput did not appear to vary significantly with distance for the suburban case, but did for the freeway case (though the former was only measured over 200 m and used a larger packet size; the latter began decaying at approx. 300 m). The range over which communication could be achieved was found to be 1000 m in a suburban environment.

J"org Ott and Dirk Kutscher. The ``Drive-Thru'' Architecture: WLAN-based Internet Access on the Road. 2004. in Proc. IEEE VTC.
Gives experimental results for 802.11g vehicle to roadside communication using external antennae. Ranges of up to 2.5 km in diameter can be achieved along an autobahn. Bandwidths of up to 54 Mbps with consequent TCP throughputs of 15 Mbit/s at 80 km/h result in cumulative transfers of up to 110 MBytes. The authors then describe the Drive-thru architecture, where proxies are used to rendezvous points in the network for TCP connections destined for the vehicles. Intermittent connectivity is therefore dealt with in the network, breaking end to end TCP semantics.

J"org Ott and Dirk Kutscher. Why Seamless? Towards Exploiting WLAN-Based Intermittent Connectivity on the Road. 2004. in Proc. TERENA Networking Conference.
Overviews the purpose and general structure of Drive-Thru Internet. Gives various detailed application examples that do not necessarily rely on always-on connectivity (e.g. e-mail, long-term push to talk audio/video, distributed object synchronisation, telematics), and can instead be adapted to intermittent connectivity by means of proxies. With vehicular connectivity connections are infrequent but high bandwidth, as compared to ubiquitous yet very low bandwidth for cellular networks. This difference requires well known application protocols to be adapted/proxied to provide a reasonable service.

J"org Ott and Dirk Kutscher. A Disconnection-Tolerant Transport for Drive-thru Internet Environments. 2005. in Proc. IEEE INFOCOM.
Decribes in detail the architecture of the drive-thru Internet proxy, Persistent Connectivity Management Protocol (PCMP). This allows TCP connections to be wrapped into PCMP TCP connections between the mobile host and a proxy. The proxy acts on behalf of the mobile host even when it has no connectivity, and buffers TCP packets for it. When connectivity is resumed, the proxy can then ``replay'' the connection to the mobile host. In this way TCP connections can be effectively paused and resumed. Plugins for particular protocols have been developed for PCMP to optimise its function for, e.g., POP3 or SMTP. This allows more aggressive compression, and for the proxy to carry out connection setup to the e-mail server when the mobile host is out of range (a process that can take up to 60 seconds). This therefore improves user experience. Real-world proof of concept trials have been performed with good results.

J"org Ott and Dirk Kutscher. Exploiting Regular Hot-Spots for Drive-thru Internet. 2005. in Proc. KiVS.
Ongoing research from the drive-thru Internet project, measuring times for using standard hotspots with DHCP and authentication procedures (e.g. automated filling in of web pages for hotspot authentication). Various types of link are tested between the hotspot and the server the mobile node is accessing (ISDN, DSL, satellite). The authors have implemented a ``smart client'' that is able to detect wireless hotspots when they come into range, perform DHCP, and automatically authenticate if it can infer what details to use from the ESSID, login page, etc.. The entire process completes in under 10 seconds, which is the entry phase (low throughput), as the vehicle approaches the access point, leaving the entire high bandwidth production phase for data transfer.

J"org Ott, Dirk Kutscher. A Mobile Access Gateway for Managing Intermittent Connectivity. 2005. in Proc. 14th IST Mobile & Wireless Communication Summit.
Describes the Drive-thru Mobile Access Gateway, which acts as a proxy for connections from a vehicle to access points that the vehicle passes. Such a DT-MAG must detect and automatically configure itself to use each access point, including performing suitable authentication. A PCMP client (see prev. ref.) is also integrated into it, as is a caching function. Essentially, the DT-MAG exists as a single proxy in a vehicle to which multiple in-vehicle users connect. This raises issues of trust. A prototype implementation has been developed, and lab. tested.

J"org Ott and Dirk Kutscher and Mark Koch. Towards Automated Authentication for Mobile Users in WLAN Hot-Spots. 2005. in Proc. IEEE VTC.
Describes a mechanism for automating authentication to WLAN hotspots as a vehicle passes through them. A device must first detect the hotspot, and determine whether its ESSID indicates it is privately owned (in which case it is discarded). Then the device must either perform IEEE 802.11i authentication, or DHCP and then Universal Access Method web-based authentication. All of these stages must be automated. In addition, a database of access points should be maintained, with the credentials required to use them. Various unimplemented extensions are proposed. The system is trialled in a city and shows promising results, though requires further extensions to cope with the multiplicity of authentication methods present.

A. Baig and M. Hassan and L. Libman. Prediction-Based Recovery from Link Outages in On-Board Mobile Communication Networks. 2004. in Proc. IEEE GLOBECOM.
Assumes that a method exists for predicting when the network connection from a vehicle will become unavailable, and then uses a Markov model to simulate how Freeze-TCP will perform depending on the accuracy of the prediction method. Freeze-TCP is notable in that the receiver must have a modified TCP stack, but not the sender. When a connection loss is predicted, the receiver sends a zero-size window advertisement, to "shut down" the connection. When connectivity is restored, it sends a non-zero window advertisement, and a triple-ACK to cause the retransmission of any lost packets. In this way it is hoped that timeouts do not occur, and connections can be maintained even with intermittent connectivity.

Tom Goff and James Moronski and D. S. Phatak and Vipul Gupta. Freeze-TCP: a True End-to-end TCP Enhancement Mechanism for Mobile Environments. 2000. in Proc. IEEE INFOCOM, pages 1537--1545.
TCP contains a mechanism for the receiver to specify a zero-size advertised window. This causes the sender to cease transmission, and instead periodically probe the receiver to see whether the window has increased in size. Freeze-TCP takes advantage of this by assuming a mechanism for predicting that disconnections are about to occur (e.g. from signal strength trends), and closing the window before the disconnection. On reconnection, to avoid the potentially long interval between zero window size probes (from the sender), it sends a triple ACK for the last data packet, containing the new window size. In this way only the receiver's TCP stack need change, and end-to-end semantics are preserved. The authors evaluate their approach using a real testbed, and compare Freeze-TCP to standard TCP, showing decreases of 20 to 50 p.c. in file transfer times for a host 2 hops away. When Freeze-TCP performs badly, its impact is negligible: at most 3.8 p.c. worse transfer times than standard TCP. The authors conclude by pointing out that methods of inferring that disconnections are about to occur are required for this scheme to work.
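
As a concrete illustration (my own sketch, not the authors' code), the receiver-side logic amounts to the following. A real implementation requires kernel TCP stack changes to emit the window advertisements, and the RSSI-trend predictor shown is purely hypothetical:

    # Conceptual sketch of Freeze-TCP receiver-side logic. Not a real TCP
    # stack: zero window advertisements need kernel support, so this only
    # models the decision logic described in the paper.
    def trend_predicts_disconnect(rssi_samples, threshold=-85.0):
        """Hypothetical predictor: flag a disconnection if the current
        RSSI trend would cross `threshold` by the next sample."""
        if len(rssi_samples) < 2:
            return False
        slope = rssi_samples[-1] - rssi_samples[-2]   # dBm per sample
        return slope < 0 and rssi_samples[-1] + slope <= threshold

    class FreezeTcpReceiver:
        def __init__(self, window=65535):
            self.window = window
            self.frozen = False

        def on_rssi(self, samples):
            if not self.frozen and trend_predicts_disconnect(samples):
                self.frozen = True
                return ("ACK", 0)   # zero window: sender pauses, no timeout
            return None

        def on_reconnect(self):
            self.frozen = False
            # Triple ACK for the last data received, advertising the full
            # window, so the sender resumes at once rather than waiting
            # for its next zero-window probe.
            return [("ACK", self.window)] * 3

    rx = FreezeTcpReceiver()
    print(rx.on_rssi([-70.0, -80.0]))   # -> ('ACK', 0): freeze before loss
    print(rx.on_reconnect())            # -> three full-window ACKs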

P. Murphy and E. Welsh and J. P. Frantz. Using Bluetooth for short-term ad hoc connections between moving vehicles: A feasibility study. 2002. in Proc. IEEE VTC, pages 414--418.
The authors use several Bluetooth devices with external antennae to test the distribution of device discovery times, and also the range over which they can communicate. They conclude that Class I devices have a sufficient range that a vehicle moving at 100 km/h past a stationary access point would have a connection time of 18 seconds (at most). They also perform simulations where they adjust the Bluetooth baseband protocol to decrease the backoff time for a node to reply to a master station query.
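
The 18 second figure accords with a simple drive-past model: assuming a straight pass at a constant 100 km/h (27.8 m/s) through a circular coverage area of radius $R = 250$ m, the maximum connection time is $t = 2R/v = 500\,\mathrm{m} / 27.8\,\mathrm{m/s} \approx 18$ s.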

Welsh, E. and Murphy, P. and Frantz, J.P. A mobile testbed for GPS-based ITS/IVC and ad hoc routing experimentation. 2002. in Proc. International Symposium on Wireless Personal Multimedia Communications, pages 796--800.
Shows results for ranges with 4 different Bluetooth antennae. Describes deployment of embedded devices with Bluetooth modules on shuttle buses round the Rice University campus, transmitting GPS readings back to fixed Bluetooth base stations. Interestingly, the coverage area for communication (once a link is established), is approximately 50 m greater in diameter than that for discovery.

Christoph Schroth and Florian Doetzer and Timo Kosch and Benedikt Ostermaier and Markus Strassberger. Simulating the traffic effects of vehicle-to-vehicle messaging systems. 2005. in Proc. ITS Telecommunications.
Briefly compares the different choices of traffic simulator, and recommends CARISMA as the best solution. This is interfaced with ns-2 to allow VANET simulations that take into account a simple radio propagation model with realistic traffic flows. The tool is used to ascertain the effects of message passing to notify vehicles of an accident that they should route around. The authors note that in most cases the average speed of vehicles with the message passing system increased, but that with high traffic densities many vehicles began using the alternative route(s), which also became congested. This negative impact should also be taken into account when examining VANET applications.

Stephan Eichler and Benedikt Ostermaier and Christoph Schroth and Timo Kosch. Simulation of Car-to-Car Messaging: Analyzing the Impact on Road Traffic. 2005. in Proc. IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, pages 507--510.
Fairly similar paper to the previous one in this list. Gives four different coupling methods tried for linking CARISMA to ns-2. The chosen method involves the traffic simulator being one simulation period ahead of the network simulator. This allows the network simulator to be given the true "next" position of the vehicle, thus ensuring that the simulations remain in synchronisation. The simulation uses a warning message to communicate to equipped vehicles that they should avoid a particular area. This means that once the area is clear, unequipped vehicles may be able to pass through it at a greater average speed than other vehicles that are on alternative routes that are now congested. Hence such a message passing system is not always optimal.

G. Bilchev and D. Marston and N. Hristov and E. Peytchev and N. Wall. Traffimatics -- Intelligent Co-operative Vehicle Highway Systems. 2004. BT Technology Journal, 22(3):73--83.
Gives an overview of the vision of a connected car. The two strands are vehicle to highway communication and vehicle to vehicle communication. Various technologies are overviewed, such as the various buses for in-vehicle communication, and wireless technologies for communication to/from the outside. Market barriers and opportunities are also considered.

Marina Aguado and Eduardo Jacob and Purificación Sáiz and Juan José Unzilla and Maria Victoria Higuero and Jon Matías. Railway Signaling Systems and New Trends in Wireless Data Communication. 2005. in Proc. IEEE VTC.
Brief survey of various wireless technologies used for train signaling. Few references. Mentions WiMax and 802.20 as possible technologies, but no detail.

Timo Kosch. Technical Concept and Prerequisites of Car-to-Car Communication. 2002. BMW Group Research and Technology.
Describes the approach of the Car-to-Car (C2C) consortium to intervehicular communication. Greedy, store-and-forward networking is used, with geocasting. The radio architecture is 802.11p, but must also provide 802.11a/b/g for backward compatibility with existing infrastructure. The author describes various security mechanisms for ensuring messages are authentic and yet anonymous, including PKI certificates and Zero Knowledge Proofs.

Amer Aijaz and Bernd Bochow and Florian Dötzer and Andreas Festag and Matthias Gerlach and Rainer Kroh and Tim Leinmüller. Attacks on Inter Vehicle Communication Systems -- an Analysis. 2006. in Proc. International Workshop on Intelligent Transportation.
Surveys possible attacks on intervehicular communication as envisaged in the Network-on-Wheels project. Considers Car-to-Car, Car-to-Infrastructure, and Car-to-Home applications. False messages and tampering are considered, as is the ability to exhaust a vehicle's battery. The need for plausibility checks on incoming data (though how this would really be done in practice is not mentioned) and for a trusted platform is identified.

Martin Grade and Klaus Meier and Bernd Rech and Andreas Lübke. Physical IEEE 802.11 -- Measurements in Automotive Environment. 2005. in Proc. ITS.
Details how the signal strength of an IEEE 802.11b signal varies radially around a vehicle, for two different cars, with the antenna in two positions. Internal antennas are more likely to suffer connection interruptions due to obstructions. Signal strength did not vary with different speeds of passing an AP. Connection times at 5 GHz were shorter than at 2.4 GHz. 5.2 GHz had a 350 m range (2.4 GHz had over 400 m). Two packet sizes were used; larger packet sizes appear to imply shorter connection times. The presentation also shows a graph of signal strength as vehicles overtake a lorry on a motorway.

Connie Ribeiro. Bringing Wireless Access to the Automobile: A Comparison of Wi-Fi, WiMAX, MBWA and 3G. 2005. in 21st Computer Science Seminar. Rensselaer at Hartford University.
Briefly overviews various wireless technologies for vehicular communication, including MBWA and WiMAX. Performance figures are derived from specifications rather than experimental data.

Florian Doetzer and Florian Kohlmayer and Timo Kosch and Markus Strassberger. Secure Communication for Intersection Assistance. 2005. in Proc. International Workshop on Intelligent Transportation.
Describes a protocol using Diffie-Hellman key exchange to allow emergency vehicles to change traffic light timings. The parties mutually authenticate themselves, to avoid man-in-the-middle attacks, and by encrypting their credentials, emergency vehicles do not reveal their identities to any outsiders. However, the scheme relies on a PKI existing for all traffic lights and emergency vehicles, which in my opinion may not be scalable. The authors also calculate the time the protocol would take to run, and hence what range of radio communication would be necessary. They give a figure of 650 m, which is not feasible with 802.11 protocols, therefore they suggest using repeaters. They do not consider whether these repeaters could simply drop messages, but do argue that such repeaters cannot violate the integrity or authenticity of the messages. Replay attacks are prevented by using nonces or timestamps.
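
The key exchange itself is standard; the following toy sketch (mine, with demonstration-only parameters) shows the structure, with the PKI-based authentication the authors require indicated in comments:

    import secrets

    # Toy Diffie-Hellman exchange between an emergency vehicle and a
    # traffic light. Illustrative only: the 127-bit modulus is far too
    # small for real use, and a deployment would additionally sign the
    # exchanged values with PKI certificates to defeat man-in-the-middle
    # attacks, as the paper requires.
    p = 2**127 - 1   # a prime modulus (demo-sized)
    g = 3

    a = secrets.randbelow(p - 2) + 1   # vehicle's ephemeral secret
    b = secrets.randbelow(p - 2) + 1   # traffic light's ephemeral secret

    A = pow(g, a, p)                   # sent vehicle -> light
    B = pow(g, b, p)                   # sent light -> vehicle

    k_vehicle = pow(B, a, p)
    k_light = pow(A, b, p)
    assert k_vehicle == k_light        # both ends derive the same key
    # The shared key then encrypts the credential exchange, hiding the
    # emergency vehicle's identity from outsiders.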

Moske, M. and Fussler, H. and Hartenstein, H. and Franz, W. Performance measurements of a vehicular ad hoc network. 2004. in Proc. IEEE VTC, pages 2116--2120.
Describes the performance of 802.11b ad hoc networking in the Fleetnet project, using real cars as nodes. Multihop performance without movement was tested, with throughput decreasing with the number of hops. The typical 802.11b effect of increasing send rate eventually resulting in decreased throughput (due to increased retransmissions taking up more channel time) was seen. A mobile three-hop scenario around a university campus was investigated, and the authors conclude there is much work to be done in the fields of dealing with unstable links (and notifying the onboard router that a link has failed), as well as the update rate of the location service used for position-based forwarding.

Andreas Festag and Holger Füssler and Hannes Hartenstein and Amardeo Sarma and Ralf Schmitz. Fleetnet: Bringing car-to-car communication into the real world. 2004. in Proc. World Congress on ITS.
Overviews the results of the Fleetnet ad hoc car-to-car communication project. The authors point out that more research is needed into dynamic environmental factors that change the radio channel. Results are described in the paper published in the 59th IEEE VTC (above).

Marc Torrent-Moreno and Daniel Jiang and Hannes Hartenstein. Broadcast reception rates and effects of priority access in 802.11-based vehicular ad-hoc networks. 2004. in Proc. ACM VANET, pages 10--18.
Evaluation of 5.9 GHz 802.11a. Simulations ascertain the packet delivery rate when there are many nodes all broadcasting, as would be the case on a highway in a VANET. The authors examine how the usage of a message prioritisation scheme similar to the 802.11 EDCA mechanism affects message delivery from the single node sending out priority messages. Simulations compare how this varies when using the two-ray-ground model for radio propagation (unit disc) and the random-variable Nakagami fading channel. The latter is a much more realistic channel model, and has a significant effect. Packets sent with a high priority had a greater probability of reception. With a non-deterministic radio model, the degree of synchronization between nodes decreases, and hence the hidden terminal problem becomes more acute, a significant point.
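
To see why the channel model matters, the sketch below (my own, with illustrative parameters not taken from the paper) contrasts the two: under a unit-disc model reception is all-or-nothing at a threshold, whereas under Nakagami-m fading the received power is Gamma distributed, giving a smooth roll-off in reception probability:

    import random

    # Reception probability under Nakagami-m fading: received *power* is
    # Gamma distributed with shape m and mean omega, so reception near
    # the threshold is probabilistic rather than all-or-nothing.
    def reception_prob(mean_power_dbm, threshold_dbm=-91.0, m=3.0,
                       trials=50000):
        omega = 10 ** (mean_power_dbm / 10.0)       # mean power, mW
        threshold = 10 ** (threshold_dbm / 10.0)    # mW
        hits = sum(random.gammavariate(m, omega / m) >= threshold
                   for _ in range(trials))
        return hits / trials

    # A deterministic threshold model would jump straight from 1.0 to 0.0
    # at -91 dBm; the fading model rolls off gradually instead.
    for mean_dbm in (-85.0, -89.0, -91.0, -93.0):
        print(mean_dbm, reception_prob(mean_dbm))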

C. Steger and P. Radosavljevic and P. Frantz. Performance of IEEE 802.11b Wireless LAN in an Emulated Mobile Channel. 2003. in Proc. IEEE VTC.
Examines the 802.11b wireless channel and how various different interface cards perform at different speeds, with regard to throughput and packet loss. This is conducted using a channel emulator. The time dispersion of a channel indoors is normally small, due to the objects off which the signal reflects being close together. Also, when users travel fast through Rayleigh fading (fast fading), the signal level changes rapidly, causing the receiver's adaptive capabilities to fail (too little time to achieve equalisation). The authors find that performance degrades at high speeds, and note that performance also degrades when equipment from different manufacturers is used as the sender and receiver.

Karl-Michael Aldinger and Rainer Kroh and Hendrik Fuchs. The DRiVE IFA Demonstrator -- Test Car and Services. 2001. in Proc. Mobile Summit.
Brief overview of the DRiVE test car, with the various services it offers. This has connections to the various types of cellular network, and also several to broadcast networks such as DVB and DAB. The services offered are a location-based city guide, navigation, a mobile office, and a coarse method of viewing congestion by obtaining live images from traffic cameras.

Gehlen, Guido and Weiss, Erik and Lukas, Sven and Rokitansky, Carl-Herbert and Walke, Bernhard. Architecture of a Vehicle Communication Gateway for Media Independent Handover. 2006. in Proc. International Workshop on Intelligent Transportation, pages 205--209.
Describes the general architecture for an in vehicle gateway that can utilise various different types of wireless network. The architecture is based on IEEE 802.21 for media independent handover. The authors have implemented a system that takes account of user preferences in order to select the most appropriate network to use. Their architecture allows further network interfaces to be easily added. They do not evaluate their system. They posit that the usage of coverage maps may be useful in deciding which network to connect to, and have deployed a GPS sensor to help them obtain these.

Duan-Shin Lee and Yun-Hsiang Hsueh. Bandwidth-reservation scheme based on road information for next-generation cellular networks. 2004. IEEE Transactions on Vehicular Technology, 53(1):243--252.
Aims to reserve bandwidth in cells that a mobile node is likely to move into next. There is a trade-off between reserving bandwidth for calls in progress that may move into the current cell, whilst keeping enough bandwidth free for new calls to be admitted. The authors present a reservation scheme that uses the road topology and the positions of traffic lights (random delays) to probabilistically reserve bandwidth in different cells. This compares well to schemes where the next cell a node will travel into is predicted from straight-line paths at constant speeds. Also, in traditional schemes, both new and handoff calls will initially use free, unreserved bandwidth; once this is exhausted, handoff calls will use the fixed amount of reserved bandwidth. With the RMCR scheme proposed in this paper, when a handoff call enters a cell, the amount reserved for it is subtracted from the free reserved amount, thus optimising bandwidth use. The paper presents a good deal of mathematics on the prediction of paths.

Dirk Kutscher and J"org Ott. Service Maps for Heterogeneous Network Environments. 2006. in Proc. Mobile Data Management Conference, pages 27--37.
Describes the possible usage and requirements of service maps that describe the locations of, and services on offer from, wireless providers. Brief calculations show that such service maps are of small size and therefore easily distributed. Characteristics of the services are limited to general statements, and in the most detailed case are such things as channels or bandwidth usage/availability.

David Gunton. MILTRANS -- Millimetric Transceivers for Transport Applications. 2006. in Proc. IEE Conference on Automotive Electronics, page 251.
Overview of the MILTRANS IVC system. This uses the 63-64 GHz range, with an 802.11a MAC (i.e. BPSK or QPSK). Test drives have shown a range of up to 170 m (140 m in fog), at speeds of up to 210 km/h, with data rates of 18 Mb/s. Both vehicle-to-infrastructure and vehicle-to-vehicle operation were tested. This is linked to ISO CALM (Continuous Communications Air Interface for Long and Medium Range Applications), which seeks to provide a routing architecture that may use multiple communications technologies (such as MILTRANS) in a dynamic and optimal fashion.

David Hadaller and Srinivasan Keshav and Tim Brecht. MVMAX: Improving Wireless Infrastructure Access for MultiVehicular Communication. 2006. in Proc. ACM SIGCOMM Workshops.
Aims to solve the 'performance anomaly' experienced by 802.11-based networks, where a station transmitting with a low-rate modulation scheme takes longer to transmit a fixed-size packet than one using a higher-rate scheme, resulting in a non-optimal allocation of channel time. The authors propose MV-MAX, a scheme which allocates the channel to the vehicle with the greatest SNR. This is unfair over short periods, but fair over longer ones, provided that vehicles experience the same SNR profile. Simulations are validated using single-vehicle real-life data. MV-MAX trades off fairness against optimality. The authors intend to implement a prototype of the system.
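
A minimal sketch of the allocation rule (my own illustration; the SNR traces are hypothetical, and log2(1 + SNR) is used as a crude rate proxy):

    import math, random

    # MV-MAX-style scheduling: each slot, serve the vehicle with the best
    # SNR, since high-SNR slots carry disproportionately more bytes.
    random.seed(1)
    slots, vehicles = 1000, 3
    snr = [[max(0.1, random.gauss(8.0, 4.0)) for _ in range(slots)]
           for _ in range(vehicles)]        # linear SNR per vehicle/slot

    def total_rate(schedule):
        return sum(math.log2(1.0 + snr[v][t])
                   for t, v in enumerate(schedule))

    mvmax = [max(range(vehicles), key=lambda v: snr[v][t])
             for t in range(slots)]
    round_robin = [t % vehicles for t in range(slots)]

    print("MV-MAX     :", round(total_rate(mvmax), 1))
    print("Round-robin:", round(total_rate(round_robin), 1))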

Vladimir Bychkovsky and Bret Hull and Allen K. Miu and Hari Balakrishnan and Samuel Madden. A Measurement Study of Vehicular Internet Access Using In Situ Wi-Fi Networks. 2006. in Proc. ACM MobiCom. Los Angeles, CA.
Describes the experiences of the CarTel project in collecting data on WiFi networks that were encountered on the normal journeys of 9 cars over approximately a month, in the Cambridge, MA area. A total of 32,111 unique APs were seen. Of these, 1023 allowed Internet connectivity. DHCP was observed to introduce several seconds of latency into the association process; this was mitigated somewhat by the use of a cache that respected lease times. Connectivity times were of the order of 20 seconds, on average. Speed did not appear to affect the performance of the connections. The distance covered by a single AP was 96 metres on average, rising to 300 m in 10 p.c. of cases. For APs providing end-to-end connectivity, TCP connections were initiated to a central server. These were at 1 Mb/s (raw link bandwidth), achieving 10 to 50 KBytes/s, with a median transfer of 216 KBytes. The authors then discuss how a community WiFi deployment would function from a technical and financial point of view, and recommend optimisations for increasing the performance of the network.

David N. Cottingham and Ian J. Wassell and Robert K. Harle. Performance of IEEE 802.11a in Vehicular Contexts. 2007. in Proc. IEEE VTC, pages 854--858.
A key component of intelligent transportation is the provision of adequate network infrastructure to support vehicle-to-vehicle and vehicle-to-roadside communication. In this paper we report on performance evaluations carried out using the IEEE 802.11a protocol at 5.2 GHz between a moving vehicle and a fixed base station. We concentrate our evaluation on realistic urban speeds and environments, observing that performance at very low speeds is degraded due to the presence of null zones. We vary the modulation scheme and analyse the spread of resulting throughputs. Our results have implications for multimedia and other real-time applications that will utilise vehicle-to-roadside connectivity.

Vikas Kukshya and Hariharan Krishnan and Christopher Kellum. Design of a System Solution for Relative Positioning of Vehicles Using Vehicle-to-Vehicle Radio Communications During GPS Outages. 2005. in Proc. IEEE VTC, pages 1313--1317.

Yin Fern Ko and Moh Lim Sim and Maziar Nekovee. Wi-Fi Based Broadband Wireless Access for Users on the Road. 2006. BT Technology Journal, 24(2):123--129.
Overviews the various ways of providing network access for vehicles. Mentions Wi-Fi mesh, WiMAX, 3G, 4G, and 802.11p. The authors consider Wi-Fi mesh to be a good candidate for the task, but note the problems of fair bandwidth allocation, co-channel interference minimisation, and the need for call admission control. They also mention a traffic simulator they have developed for testing the performance of wireless technologies in realistic traffic scenarios.

David N. Cottingham and Jonathan J. Davies. A Vision for Wireless Access on the Road Network. 2007. in Proc. 4th International Workshop on Intelligent Transportation, pages 25--30.
Metropolitan area wireless networks are currently being deployed in major cities around the world, whilst in tandem there has been much research into vehicle-to-roadside communication. New applications for vehicular networking become possible as blanket, low-cost, wireless networks begin to exist across cities, resulting in Connected Traffic, rather than isolated Connected Cars. We examine the nature of these applications, classifying them according to their distinguishing characteristics, and discussing their demands on the network architecture required. We then outline our current work on the language and compiler support required for the application developers in order to deploy applications on such networks. Also, we discuss current work on coverage mapping algorithms in order to enable better prediction of network conditions. Finally, we describe our forthcoming WiMAX deployment to create a larger-scale vehicular access testbed, and the Sentient Vehicles project. This work has important implications for the implementors of wireless networks on roads, and for developers designing applications to harness such connectivity.

Matthias Wellens and Burkhard Westphal and Petri Mähönen. Performance Evaluation of IEEE 802.11-based WLANs in Vehicular Scenarios. 2007. in Proc. IEEE VTC, pages 1167--1171.
Evaluation of 802.11b/g (and briefly 802.11a) for vehicle-to-vehicle and vehicle-to-roadside communication. The authors examine the UDP and TCP goodputs at high speeds (120 km/h) and find that speed appears not to affect performance. They also find that for vehicle-to-vehicle communication TCP throughput is inversely proportional to intervehicular distance.

Maziar Nekovee and Benedikt Bjarni Bogason. Reliable and Efficient Information Dissemination in Intermittently Connected Vehicular Adhoc Networks. 2007. in Proc. IEEE VTC, pages 2486--2490.
Message propagation over VANETs, with the idea that broadcasts should be carried out by vehicles at the edges of clusters. These vehicles then wait until they are near (or join) another cluster, and propagate the message to it. Simulations show that this results in better dissemination than naive broadcasting (and subsequent immediate discard).

Moez Jerbi and Sidi-Mohammed Senouci and Mahmoud Al Haj. Extensive Experimental Characterization of Communications in Vehicular Ad Hoc Networks Within Different Environments. 2007. in Proc. IEEE VTC, pages 2590--2594.
Various experiments involving video over an 802.11b ad hoc intervehicular network. These were of cars following each other in highway and suburban environments, as well as passing by each other, and a roundabout scenario. The authors conclude that multimedia transmission between cars is possible.

Almudena Diaz and Pedro Merino and Laura Panizo and Alvaro M. Recio. Evaluating Video Streaming Over GPRS/UMTS Networks: A Practical Case. 2007. in Proc. IEEE VTC, pages 624--628.
Experiments with H.263 streaming using RTCP over real UMTS and GPRS networks. One experiment utilised a car travelling at an average of 100 km/h. For the static scenario, 95 p.c. of end-to-end delays were within 120 ms, and 99 p.c. within 800 ms, showing a wide distribution. In the vehicular scenario, cell handover with GPRS is a hard handoff, which causes packet loss, whilst in UMTS the mobile node is able to listen on two channels simultaneously and hence achieve a soft handoff. The authors conclude that video streaming over UMTS networks is possible.

Bo Xu and Ouri Wolfson and Channah Naiman and Naphtali D. Rishe and R. Michael Tanner. A Feasibility Study on Disseminating Spatio-temporal Information via Vehicular Ad-hoc Networks. 2007. in Proc. V2VCOM.
Compares the usage of an inter-vehicular ad hoc 802.11b network to a centralised server model using the cellular network. Simulations using SWAN/STRAW. Uses messages on resource availability that are more valuable if received with a shorter delay. The evaluation therefore focusses on how high the average utility of received messages is. Little detail is given about the centralised network approach, except to say that it is expensive compared to the capital cost of deploying user terminal equipment in vehicles. The authors conclude that a VANET can have an equivalent utility to a cellular network if a high enough density of equipped vehicles is present.

Ratul Mahajan and John Zahorjan and Brian Zill. Understanding WiFi-based Connectivity from Moving Vehicles. 2007. in Proc. ACM IMC, pages 321--326.
Overviews the VanLAN testbed, consisting of two vans and 11 base stations sited on the Microsoft campus in Redmond. The vans act as shuttles over the course of each day, and hence can record information about the BSs from which they receive beacons. The authors have modified the drivers used on the mobile nodes to record RSSI on a per-packet basis. Rather than creating data traffic, the evaluation solely concerns beacon reception rates (BRR). The authors find that in terms of BRR, WiFi connections from vehicles do not have the standard entry, production and exit phases. Instead, grey areas are present, defined as periods of 2 seconds or more when no beacons are heard (i.e. at least 20 beacons are lost). These grey areas separate small sessions where beacons can be heard normally, i.e. in one macro-session there are multiple mini-sessions and grey areas. The authors do not specify whether the traces take into account the fact that the vans' speeds vary, and hence the length of grey areas in terms of time may not be correct. The authors state that grey areas are hard to predict from recent measurements such as BRR or RSSI. Speed and distance from the BS also do not appear to have an effect. However, grey areas do appear to have a roughly consistent location when examining the probability of a grey area with respect to the average BRR at a particular location. The paper ends with simulations of how well such prediction functions, and the authors conclude that predictions are more successful in areas of very good or very bad coverage.
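
Applying their definition to a beacon trace is straightforward (my sketch; the trace below is hypothetical, and the standard 100 ms beacon interval is assumed, so a 2 s gap means at least 20 lost beacons):

    BEACON_INTERVAL = 0.1   # seconds (standard 802.11 beacon period)
    GREY_GAP = 2.0          # seconds, per the paper's definition

    def grey_areas(beacon_times):
        """Return (start, end, beacons_lost) for each gap >= GREY_GAP."""
        gaps = []
        for prev, cur in zip(beacon_times, beacon_times[1:]):
            if cur - prev >= GREY_GAP:
                lost = int((cur - prev) / BEACON_INTERVAL) - 1
                gaps.append((prev, cur, lost))
        return gaps

    trace = [0.0, 0.1, 0.2, 2.9, 3.0, 3.1, 8.4]   # hypothetical times
    print(grey_areas(trace))   # -> [(0.2, 2.9, 26), (3.1, 8.4, 52)]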

Stephan Eichler. Performance Evaluation of the IEEE 802.11p WAVE Communication Standard. 2007. in Proc. IEEE WiVeC.
Simulation using OMNET of 802.11p. Briefly overviews the 802.11p standard with its control, service and safety of life channels (seven channels of 10 MHz in total). QoS is realised using Access Classes, AC0 to AC3, the latter having the highest priority by using the lowest contention window and AIFS. The simulation results show that with large numbers of nodes in a dense scenario traffic shaping algorithms that map messages to appropriate ACs are likely to be needed if the level of collisions is not to be too high. This is particularly significant for safety of life messages that, according to these simulations, would not be delivered to a suitable level of success.

David Hadaller and Srinivasan Keshav and Tim Brecht and Shubham Agarwal. Vehicular Opportunistic Communication Under the Microscope. 2007. in Proc. ACM MobiSys, pages 206--219.
The authors propose that the reason for lacklustre performance in vehicle-infrastructure connectivity is the lack of environmental awareness of the vehicular environment. The paper details experiments using iperf and a roadside AP to evaluate the performance of 802.11g. Various flaws in protocols at different layers are pointed out that reduce overall throughput. These are lengthy AP selection due to scan times, MAC management timeouts (where the Madwifi driver has a hard-coded timeout of 5 seconds if a MAC association frame is lost), application initialisation delay (minimal), overestimation of initial (and final) MAC bit rates, early TCP timeouts in the entry and exit phases (a ragged connection causing the packet losses), and a slow adaptation of MAC bit rate at both sender and receiver. The authors use their own MAC bit rate selection algorithm to achieve substantial increases in throughput. This algorithm uses a smaller sample window, takes into account a large fraction of the packets on the air, and reviews its decision every 100 (rather than 1000) ms. The paper also details the efforts made to combat GPS error, and recommends that devices should either be more aware of the vehicular environment, or simply not use the entry and exit phases (for which they may be able to use a simple RSS trigger).

Jakob Eriksson and Hari Balakrishnan and Sam Madden. Cabernet: A Content Delivery Network for Moving Vehicles. 2008. MIT-CSAIL-TR-2008-003, MIT CSAIL.
Describes the design and deployment of a system to optimise short-lived, opportunistic connections to open WiFi APs that are encountered by vehicles. The system was deployed on 10 taxis operating in the Boston area. The authors implement QuickWifi, which uses short timeouts and 5 retries per stage of connection setup. The authors also compute the optimal channel scanning strategy, bearing in mind that there are only 3 non-overlapping WiFi channels. They then describe CTP, which replaces TCP by using a proxy server in the Internet. CTP separately estimates delays over the one-hop wireless link and over the wired path from the AP to the Internet server. In this way it is able to infer whether packet loss is caused by congestion on the wired link (in which case the transport protocol should back off), or by loss on the wireless link. In addition, send rate selection is performed using probe packets to the AP, to ascertain a reasonable window size value.

Alex Pentland and Richard Fletcher and Amir Hasson. DakNet: Rethinking Connectivity in Developing Nations. 2004. IEEE Computer, 37(1):78--83.
Describes the DakNet project, where wireless ad hoc networking is used on buses in India, and motorbikes/ox carts in Cambodia. Data is physically carried by these vehicles between villages and cities, the idea being that kiosks in the villages can use the onboard wireless access point in the bus to upload and download data from the Internet asynchronously. This saves villagers long and expensive trips into the city in order to retrieve information. Similarly, schools can be connected to the Internet for the first time, without the expense of a wireline link to each one. The system makes use of off-the-shelf IEEE 802.11 hardware, in conjunction with an amplifier and high gain antennas.

A. Seth and D. Kroeker and M. Zaharia and S. Guo and S. Keshav. Low-cost communication for rural internet kiosks using mechanical backhaul. 2006. in Proc. ACM MobiCom, pages 334--345.
Describes the KioskNet project, which connects village kiosks using mechanical backhaul (APs deployed on vehicles) to Internet connectivity available in cities. Kiosks use cheap, recycled computing equipment, which can boot from an image on the AP. This paper overviews the naming, addressing, forwarding, etc. considerations that must be made in this type of DTN, as well as the use of a scheme known as hierarchical identity-based cryptography to secure the communications.

Jun Luo and Jean-Pierre Hubaux. A Survey of Inter-Vehicle Communication. 2004. IC/2004/24, School of Computer and Communication Sciences, EPFL, Switzerland.
Overviews the various vehicle communication projects carried out up to 2004, examines the possibilities of using IEEE 802.11 and UMTS for vehicular communication, and then overviews security in IVC (basically none), and simulation frameworks for testing IVC.

Srdjan Čapkun and Jean-Pierre Hubaux. Secure Positioning in Wireless Networks. 2006. IEEE Journal on Selected Areas in Communications, 24(2):221--232.
Details the security problems with current RF location systems, and suggests a new approach: verifiable multilateration. In this approach, the reporting node is assumed to have a non-clonable identity. It can calculate its position based on RF signals from at least 3 base stations. These base stations can then verify the calculation based on the fact that (acting together) they are able to estimate the position of the node by means of the time of flight of its messages to them. Note that distance reduction (node to base station) is not possible, but distance increase is (just delay the replies). The verifiers calculate the bounds on the distance of the node from each of them, and then compare these bounds to the position reported by the node. If they match to within a certain tolerance, the position is checked to see that it is within one of the triangles formed by the base stations that are acting as verifiers (hopefully more than 3). If both of these tests are passed then the position is deemed valid.
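
The verification step reduces to two checks, sketched below (my illustration; coordinates, bounds and tolerance are hypothetical):

    import math

    # Verifiable multilateration, verification side: each verifier holds
    # a distance bound to the node (distance bounding means the node
    # cannot appear *closer* than it is), and the claimed position must
    # both match the bounds and lie inside the verifiers' triangle.
    def inside_triangle(p, a, b, c):
        def cross(o, u, v):
            return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
        s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
        return (s1 >= 0 and s2 >= 0 and s3 >= 0) or \
               (s1 <= 0 and s2 <= 0 and s3 <= 0)

    def verify(claimed, verifiers, bounds, tolerance=5.0):
        for v, bound in zip(verifiers, bounds):
            if abs(math.dist(claimed, v) - bound) > tolerance:
                return False    # inconsistent with a distance bound
        return inside_triangle(claimed, *verifiers)

    verifiers = [(0.0, 0.0), (100.0, 0.0), (50.0, 90.0)]
    print(verify((50.0, 30.0), verifiers, [60.0, 60.0, 61.0]))     # True
    print(verify((50.0, 120.0), verifiers, [130.0, 130.0, 31.0]))  # False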

Maxim Raya and Jean-Pierre Hubaux. Securing vehicular ad hoc networks. 2007. Journal of Computer Security, 15(1):39--68.
Overview of the security issues faced in VANETs. Covers simple attacks such as spoofing, and moves on to consider hidden vehicles (where a vehicle is dissuaded from transmitting because another vehicle notifies it that it is in a better position to transmit the warning messages), tunnel attacks (where false data can be injected when vehicles do not have verifiable positions due to loss of GPS [or similar] coverage), wormhole attacks (where two attackers pass messages from one location to another over a high speed link and replay them there), and bush telegraph attacks (where small errors are added to messages as they are passed over multiple [malicious] hops, such that each error is within a particular tolerance, but overall erroneous data is introduced). The authors go on to examine how, in the light of these issues, VANETs should be secured.

Lin Cheng and Benjamin E. Henty and Daniel D. Stancil and Fan Bai and Priyantha Mudalige. Properties and Applications of the Suburban Vehicle-to-Vehicle Propagation Channel at 5.9 GHz. 2007. in Proc. ICEAA, pages 121--124.
Describes experiments involving two vehicles driving on suburban streets, where GPS positions, in conjunction with RF measurement equipment, are used to evaluate how the Doppler spread of 5.9 GHz transmissions is affected by the relative speed and separation distance of the vehicles. Lower speeds resulted in lower Doppler shifts, as did smaller vehicle separation distances. The authors describe the expected model and compare it to the actual results, hypothesising that the discrepancy is due to fewer non-isotropically oriented scatterers than assumed in the model.

J. Zhao and G. Cao. VADD: Vehicle-Assisted Data Delivery in Vehicular Ad Hoc Networks. 2006. in Proc. IEEE INFOCOM, pages 1--12.
Presents a routing protocol for vehicular ad hoc networks. Every vehicle is assumed to have a GPS receiver and a digital map that also contains real-time information concerning traffic congestion. The protocol is based on the carry-forward paradigm, where messages are forwarded to well chosen vehicles in order to reach their destination. In ad hoc networks, the next-hop can be chosen by (e.g.) the neighbour closest to the destination (as in GPSR). In vehicular networks, account should be taken of the road topology and traffic speeds: vehicles travel slower than wireless signals (delays within forwarding nodes are not accounted for here). A stochastic model to decide which road a packet should take at each intersection (calculated when it reaches it) is presented. Different approaches to choosing the next hop are simulated in ns-2, and compared to GPSR and epidemic routing. The results suggest that VADD performs better than a routing protocol that is not vehicular-movement-aware.

R. H. Frenkiel and B.R. Badrinath and J. Borres and R. D. Yates. The Infostations challenge: balancing cost and ubiquity in delivering wireless data. 2000. IEEE Personal Communications, 7(2):66--71.
Briefly overviews the concept of Infostations, where islands of high throughput connectivity are used for bulk transfers, rather than congesting ubiquitous networks that were traditionally used for voice communications. The problem of such cellular networks not having adaptive modulation schemes, as well as being high cost, is described as motivation. The Infostations approach leads to various questions, such as the need for predicting what stations a user will come into contact with, whether other low bandwidth networks can be used to transmit requests for pre-caching of large files, and what the fastest method of connecting to an infostation is.

J. Irvine and D. Pesch and D. Robertson and D. Girma. Efficient UMTS Data Service Provision using INFOSTATIONS. 1998. in Proc. IEEE VTC, pages 2119--2123.
Points out that the idea of a ubiquitous, uniform-QoS network is inefficient. Proposes integration of the Infostations concept into the (then) future UMTS network in order that the two could share some of the same controlling infrastructure. Infostation controllers are imagined to be entirely separate from each other, and hence form a fully distributed system.

Uichin Lee and Joon-Sang Park and Eyal Amir and Mario Gerla. FleaNet: A Virtual Market Place on Vehicular Networks (Invited Paper). 2006. in Proc. 3rd Annual International Conference on Mobile and Ubiquitous Systems -- Workshops, pages 1--8.
Proposes an electronic bazaar that runs over a VANET. Users can list items and their relevant geographical areas. These listings are carried over the VANET. Other users can list their requests and their budgets. The system transmits these details and finds matches. The system is simulated in ns-2.

Alok Nandan and Saurabh Tewari and Shirshanka Das and Mario Gerla and Leonard Kleinrock. AdTorrent: Delivering Location Cognizant Advertisements to Car Networks. 2006. in Proc. WONS, pages 203--212.
Proposes replacing physical billboards with APs that transmit advertisements that are location-related. These would be large file-size, multimedia presentations. Users would search for particular products/services over a VANET, and the appropriate advertisement would be pulled towards them, from vehicles on which it is cached. This is in contrast to FleaNet, where the entire queries are distributed (as opposed to gossip concerning what ads a vehicle is caching).

A. Festag and G. Noecker and M. Strassberger and A. Lübke and B. Bochow and M. Torrent-Moreno and S. Schnaufer and R. Eigner and C. Catrinescu and J. Kunisch. 'NoW -- Network on Wheels': Project Objectives, Technology and Achievements. 2008. in Proc. 5th International Workshop on Intelligent Transportation, pages 211--216.
Describes the achievements of the Network-on-Wheels project (successor to FleetNet). Software was created to allow both geocasting and specific-receiver forwarding. This was integrated into real vehicles and tested in situations such as allowing emergency vehicles to pass with priority at junctions. In addition, broadcast algorithms are aware of vehicle density in order to not cause broadcast storms when many vehicles are present.

Aruna Balasubramanian and Ratul Mahajan and Arun Venkataramani and Brian Neil Levine and John Zahorjan. ViFi: Interactive WiFi Connectivity for Moving Vehicles. 2008. in Proc. ACM SIGCOMM.
Points out that taking advantage of multiple base stations in a vehicular environment is useful. Each second, vehicles broadcast beacons that tell surrounding base stations which BS they are primarily connected to (the anchor) and what the reception probabilities are for all BSs that the vehicle can hear. Because each packet is acknowledged over 802.11, surrounding base stations can calculate the reception probabilities at the anchor base station (or, conversely, by the vehicle). They use this probability to ascertain whether they should also broadcast the potentially lost packet. The method is deployed on shuttles around the MSR campus, and simulated using traces from another urban environment. The authors also analyse how well web sessions and VoIP sessions perform when ViFi is compared to other methods, such as BS selection based on BRR, showing that ViFi achieves significantly longer session lengths.


Satellite Positioning

W. Ochieng and K. Sauer. Urban road transport navigation: performance of the global positioning system after selective availability. 2002. Transportation Research Part C, 10:171--187.
Compares the performance of GPS with and without Selective Availability inside central London. The authors conclude that only rarely (38 p.c. of the time) are there enough satellites in range (4) to obtain an accurate position fix, and even more infrequently (15 p.c. of the time) are there sufficient (5) to determine what the inaccuracy might be. The authors compare the performance of GPS without SA and GPS with SA enabled but also using DGPS corrections, and find the performances to be approximately equal. They then proceed to describe a simulation involving low frequency calibration and positioning signals which can be used in conjunction with GPS to cover for short outages. The principal conclusion is that GPS is insufficiently available at a high enough accuracy for vehicle telematics applications. The authors mention that 97 p.c. of the time there is at least one satellite visible -- it is not clear how much use this would be for positioning.

European Space Agency. The European dependence on US-GPS and the GALILEO initiative. 2002. European Commission.
Describes the reasons for the implementation of a European satellite navigation system. In particular, it stresses over-reliance on the ageing US GPS system. It also identifies the need for greater accuracy, suitability for safety-critical applications, and the need for European companies to obtain market share in the navigation market. Galileo will provide service guarantees, greater timing accuracy, and better search and rescue facilities.

Harvey Appelbe. Taking Charge. 2004. Traffic Technology International, pages 52--56.
Magazine article on the suitability of GPS for congestion charging. The author notes that the usage of ANPR or DSRC does not scale to a national system, where gantries would be needed on every road. A satellite-based system appears the only option, but the performance of such systems is not yet high enough. Average OBU accuracy is between 7 and 15 m depending on device quality. For a 99 p.c. confidence level, one must accept 30 to 100 m of error margin. High-end models are not generally much more accurate than consumer units. Availability can drop to only 60 p.c. for some units. Accuracy in urban areas is poor, as is the ability to receive differential correction signals. The author concludes by noting that the requirements for a congestion charging positioning system have not been set out, and that any OBU to be used should be rigorously tested (as it is likely to use many methods in addition to, e.g., GPS to obtain a position fix).

Bern Grush. The Delicate Art of Tolling. 2004. Tolltrans, pages 52--54.
Gives an overview of satellite-based road charging, in response to an IPPR report. The author considers GNSS systems not yet to be accurate enough. Also mentions the issue of social acceptability: there is a need to make it beneficial for people to use the system, since 'only a minority of individuals consider the common good when making transport decisions'.

Ronesh Puri and Ahmed El Kaffas and Alagan Anpalagan and Sridhar Krishnan and Bern Grush. Multipath Mitigation of GNSS Carrier Phase Signals for an On-board Unit for Mobility Pricing. 2004. in Proc. CCECE.
Describes a method for filtering the signals from several GNSS satellites to eliminate those giving incorrect position fixes due to multipath effects. The method relies on the vehicle being stationary for 7-10 minutes (i.e. parked) to obtain fixes from many satellites. Assumes that both GPS and Galileo satellites will exist in the future, i.e. that >20 will be visible. This seems unlikely given that in urban environments frequently <4 can be seen. For road charging, it simply interpolates between parked locations. Results are inconclusive: one data set gives a more localised location, whilst another is in some cases worse (or at best neutral).

M. O'Donnell and T. Watson and J. Fisher and S. Simpson and G. Brodin and E. Bryant and D. Walsh. A Study of Galileo Performance -- GPS Interoperability and Discriminators for Urban and Indoor Environments. 2002. in Proc. ION GPS.
Describes the predicted performance of Galileo, on its own and combined with GPS. Even with GPS and differential corrections, the performance in high-rise urban environments is as low as 25 m accuracy with an availability of 90 p.c. With only GPS or only Galileo, the service is predicted to be essentially unavailable in high-rise urban environments.

George P. Gerdan and Lucinda J. Coombe and Frank Takac. The Effects of RF Interference, Multipath and Signal Obstruction on the GPS Observables. 1995. SDC95/1, Department of Land Transportation, Royal Melbourne Institute of Technology.
Quotes extensively from two other reports on GPS jamming. Of note is that of the four civil receivers tested, only one showed good performance under the influence of a jamming source on a near-band frequency. Another test showed that jamming has an effect a significant distance away: with a normal TV station transmitter (5 MW) on an open hill top 13 km away, no satellites could be tracked.

ARINC. Navstar GPS Space Segment/Navigation User Interfaces. 2004. IS-GPS-200, Revision D, Navstar Joint Program Office.
Detailed specification of the GPS.

T. A. Spencer and R. A. Walker. Prediction and Analysis of GPS Susceptibility to Multipath and Spoofing Interference for Land and Space Applications. 2003. in Proc. 6th International Symposium on Satellite Navigation Technology Including Mobile Positioning & Location Services. Queensland University of Technology.
Describes accidental interference from sources transmitting at similar frequencies to the GPS L1 and L2 signals. Also details jamming techniques, with wideband jamming being the simplest. More complex techniques are spoofing, where a signal that appears to be that of a viable satellite is transmitted, and meaconing, where real satellite signals are delayed and then retransmitted. A 1 W jammer can have a significant envelope, and with several such sources an area of hundreds of square kilometres could be covered.

Martin C. Libicki. Illuminating Tomorrow's War. 1999. McNair Paper 61, Institute for National Strategic Studies.
Paper on various technologies, but includes the statement that a 1 W transmitter will jam civilian GPS receivers for a radius of 20 km.

Felix Butsch. GPS and GLONASS Radio Interference in Germany. 1999. in Proc. of ION.
Details the interference by amateur radio on the performance of GPS near airports in Germany, to determine the safety implications for aircraft relying on the system.

Anonymous. Low Cost and Portable GPS Jammer. 2005. Phrack Inc.
Describes how to manufacture a GPS jammer. This requires fairly cheap parts, and access to RF test equipment. Certainly feasible for an RF enthusiast, and easily manufactured and sold on the black market.

Markus G. Kuhn. An Asymmetric Security Mechanism for Navigation Signals. 2004. in Proc. Information Hiding: 6th International Workshop.

Saab, S.S. and Kassas, Z.M. Power Matching Approach for GPS Coverage Extension. 2006. IEEE Transactions on Intelligent Transportation Systems, 7(2):156--166.
Describes a method and results for improving the accuracy of GPS positioning in urban environments. Normal methods utilise dead reckoning, but the authors choose to build up a database of RSSI values for each space vehicle at many different locations. Later, when the RSSI values are for whatever reason not high enough, they can utilise the database to ascertain what location best fits the RSSIs the receiver is experiencing. This appears to require a great deal of surveying to be carried out, in contrast to already existing methods utilising dead reckoning and map matching. The authors point out that dead reckoning does not work well on long straight roads, as errors build up.

Ramjee Prasad and Marina Ruggieri. Applied Satellite Navigation using GPS, GALILEO, and Augmentation Systems. 2005. Artech House, London, UK.
States that the accuracy of a modern GPS receiver is usually modelled as a bivariate normal distribution, with a standard deviation of 4.25 m, resulting in approximately 95 p.c. of readings from the receiver falling within 8.5 m of its true position.
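
For reference (my gloss, not the book's wording): if the error is a circular bivariate normal with per-axis deviation $\sigma$, the radial error $r$ is Rayleigh distributed, $P(r \le R) = 1 - e^{-R^2/2\sigma^2}$, and $\mathrm{drms} = \sqrt{\sigma_x^2 + \sigma_y^2}$. Reading the 4.25 m figure as the drms value makes 8.5 m the conventional 2drms bound, which contains roughly 95-98 p.c. of fixes depending on the ellipticity of the distribution.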

John A. Volpe Transportation Systems Center. Vulnerability Assessment of the Transportation Infrastructure Relying on the Global Positioning System. 2001. John A. Volpe Transportation Systems Center.
Detailed report describing GPS vulnerabilities, including a good analysis of the jamming possibilities. A 1 W jammer will disrupt locked-on receivers for 10 km, and prevent acquisition for 85 km. A GPS-like jamming signal could disrupt performance for 620 miles.

C. Basnayake and O. Mezentsev and G. Lachapelle and M.E. Cannon. An HSGPS, inertial and map-matching integrated portable vehicular navigation system for uninterrupted real-time vehicular navigation. 2005. Int. J. Vehicle Information and Communication Systems, 1(1/2):131--151.
HSGPS accuracy is better than normal GPS, but still not ideal in urban canyons. The authors show that on a typical drive round the city of Calgary, 53 p.c. of their position fixes had errors of 5 metres or more. They therefore use low-cost inertial sensors to sense when the vehicle turns through 90 degrees, or when it is stopped. The latter is important as HSGPS has a tendency to accumulate errors when the vehicle has been moving but then stops. By integrating these sensors and using a Maximum A Posteriori (MAP) estimator for map matching, they were able to obtain 93 p.c. of their readings with less than 5 metres of error.

Sotiris Brakatsoulas and Dieter Pfoser and Randall Salas and Carola Wenk. On map-matching vehicle tracking data. 2005. in Proc. VLDB, pages 853--864.
Describes map-matching algorithms for use with GPS positioning devices in vehicles. The standard per-point method is to match on the basis of line segment distance (i.e. proximity to a line, or, if its projection falls on the imaginary extension of the line segment, to its start or end point). More advanced methods utilise the trajectory, such as the angle of travel compared to the bearing of the road. Incremental techniques can be used, where the road to be matched is chosen as the most likely one after a number of points. This can be extended to global matching, for a large number of points in an entire journey, where the Fréchet distance (the minimum rope length such that two people, travelling at independent speeds along the two curves, which have coincident start and end points, can remain joined by the rope throughout) is used to evaluate globally how well the algorithm has performed. The authors evaluate using the Fréchet distance and the average Fréchet distance on real traces from Greece.
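
For polylines sampled as point sequences, the discrete Fréchet distance can be computed with a simple dynamic programme. This is my sketch of the Eiter-Mannila coupling recurrence, a common stand-in for the continuous measure; the track and road below are hypothetical:

    import math
    from functools import lru_cache

    def discrete_frechet(P, Q):
        """Discrete Frechet distance between point sequences P and Q."""
        @lru_cache(maxsize=None)
        def c(i, j):
            d = math.dist(P[i], Q[j])
            if i == 0 and j == 0:
                return d
            if i == 0:
                return max(c(0, j - 1), d)
            if j == 0:
                return max(c(i - 1, 0), d)
            return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
        return c(len(P) - 1, len(Q) - 1)

    gps_track = [(0, 0), (1, 0.2), (2, -0.1), (3, 0)]   # hypothetical fixes
    road = [(0, 0), (1, 0), (2, 0), (3, 0)]             # candidate centreline
    print(discrete_frechet(gps_track, road))            # -> 0.2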

John A. Klobuchar. Ionospheric Effects on GPS. 1991. GPS World.
The ionosphere consists of a layer of ions and electrons at a height of 50 to 1000 km above the earth's surface. These charged particles affect the speed of propagation of radio waves: the greater the electron density, the greater the effect. Because ultraviolet light causes greater ionisation, the electron density is greater during daylight hours, and during the summer; the effect is particularly pronounced in the equatorial regions. There are also solar cycles (11 years) that result in variations in Total Electron Content (TEC). Dual-frequency GPS receivers can correct for ionospheric variations by exploiting the fact that the effect of the electron content is inversely proportional to the square of the frequency, whereas the theoretical straight-line propagation time should be equal for the two frequencies. Single-frequency receivers can correct against ionospheric errors by using correction factors transmitted by the GPS satellites themselves. Irregularities in the ionosphere can cause diffraction and refraction (scintillation) effects producing signal fading that can make receivers lose lock entirely. Scintillation effects are present when transitioning from day to night, i.e. between sunset and midnight (approx.). Magnetic storms also affect GPS receivers; they are caused by solar flares giving off charged particles that move along the magnetic field lines leading to the poles, resulting in a highly varying TEC at the poles.
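
The standard correction (my summary of the well-known first-order result, not a quotation from the article): the ionospheric group delay is $\Delta t = 40.3\,\mathrm{TEC}/(c f^2)$, so a dual-frequency receiver measuring pseudoranges $P_1$, $P_2$ at frequencies $f_1$, $f_2$ can form the ionosphere-free combination $P_{IF} = (f_1^2 P_1 - f_2^2 P_2)/(f_1^2 - f_2^2)$, which cancels the first-order term.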

Kipp Jones and Ling Liu and Farshid Alizadeh-Shabdiz. Improving Wireless Positioning with Look-ahead Map Matching. 2007. in Proc. MobiQuitous.
Describes a look-ahead map matching algorithm, which considers how well the GPS tracks from a journey fit on a set of candidate roads. It also takes into account how similar the distances between the original GPS points and the resulting snapped points are. The evaluation of the improvement in accuracy is somewhat unclear, as the map-matched positions appear to be compared to those produced by a wireless positioning system, whose accuracy is not detailed.


Cellular Positioning

Y. Zhao. Mobile phone location determination and its impact on intelligent transportation systems. 2000. IEEE Transactions on Intelligent Transportation Systems, 1(1):55--64.

Christopher Drane and Malcolm Macnaughtan and Craig Scott. Positioning GSM Telephones. 1998. IEEE Communications Magazine, 36(4):46--59.
Describes the various techniques available to locate mobile 'phones. The simplest is propagation time, where the time of flight to two base stations can be used simply to intersect two or more coverage circles to obtain a location. Then there is Time Difference Of Arrival (TDOA), where for each pair of base stations a handset can hear, it calculates the difference in the time at which signals from those two base stations arrive. This places the handset on a hyperbola between the two BSs; intersecting two or more hyperbolae gives the location. If more sophisticated receiver equipment is available, Angle Of Arrival (AOA) can be used, where a line can be drawn along the angle measured from each base station; where two or more such lines intersect is the location. Finally, there is the option of using the carrier phase, to obtain a range more accurate than the carrier wavelength; unfortunately there is no way of knowing the integer number of wavelengths between the receiver and the transmitter, and maintaining a lock on the carrier signal is much harder in GSM than in GPS (another carrier-phase location system). The paper also overviews GSM and how the various location systems might be deployed.
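
The defining relation behind the TDOA hyperbolae: for base stations at $b_i$ and $b_j$ with arrival times $t_i$, $t_j$, the handset position $x$ satisfies $\lVert x - b_i \rVert - \lVert x - b_j \rVert = c\,(t_i - t_j)$, where $c$ is the propagation speed. Each pair of base stations thus constrains $x$ to one branch of a hyperbola, and two or more independent pairs intersect in the position estimate.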

Peter Duffett-Smith and Paul Hansen. Precise Time Transfer in a Mobile Radio Terminal. 2003. Cambridge Positioning Systems Limited.
Describes the Matrix algorithm for mobile 'phone positioning. This is based on the idea that if the base station timings for multiple handsets can be obtained, then the location of a target handset can be determined by solving a set of simultaneous equations. This is of use in urban areas where time of flight is unlikely to be directly proportional to distance due to NLOS transmission (multipath etc.). The system can function without multiple handsets by instead using historical readings from the past few hours when the position fix is demanded. Accuracy ranges between 50 and 100 m (depending on whether this is indoors or outdoors) with GSM systems, and is expected to improve significantly with W-CDMA systems. The Matrix base station timing offsets can also be used to infer an accurate clock, which can then be used to improve the acquisition time of GPS satellite signals.

Suvi Ahonen and Pekka Eskelinen. Mobile Terminal Location for UMTS. 2003. in IEEE AESS Magazine.
Describes the performance of the Time Difference Of Arrival (TDOA) algorithm under UMTS for positioning handsets. In an urban environment the error is 215 metres at the 67th percentile, but the authors present a Database Correction Method (DCM) that decreases this error to 20 metres. The DCM relies on measurements of a particular parameter having previously been made at several points in the region where the device is, i.e. this method will not work for areas that have a very low rate of devices traversing them.

Kashif Raja. Critical Analysis and Modelling of Location Finding Services. 2004. Master's thesis, Justfone plc and Napier University.
Describes in detail the various location techniques for mobile telephones, and their relative merits.

R. Owen and L. Lopes. Experimental analysis of the use of angle of arrival at an adaptive antenna array for location estimation. 1998. in Proc. IEEE PIMRC, pages 602--611.
This paper presents results of an adaptive antenna measurement platform for DCS1800 frequency division duplex operation. A description of the measurement site in central London, UK is followed by a brief note on the data available and used for post-processing in this paper. Frequency division duplex operation is analysed in terms of the estimated peak spectral energy density `look-direction' in the uplink and downlink channels, and compared to GPS-estimated positions of a mobile transmitter. Single-point measurements from two specific locations in central London are analysed in terms of delay, Doppler and angular spread. It has been found that the ratio of the speed of the mobile to the duration of the measurement period determines not only the fading statistics but the correlation between the uplink and downlink AOA estimates. Statistical analysis of the angular accuracy of the uplink direction estimate against the GPS-estimated angle shows a standard deviation of 35 degrees. It is possible to conclude that DOA-based location estimators in a highly scattered environment, using four-element antenna panels, will be inaccurate to such an extent that DOA-based location estimation is probably not viable for E911 requirements in a European context. The location estimate at the 67th percentile is within an angle error, with respect to the `true' GPS-estimated angle of arrival, of ±12 degrees for high mobile speed/measurement period duration ratios and ±30 degrees across all mobile speeds.

Dogan Kesdogan and Hannes Federrath and Anja Jerichow and Andreas Pfitzmann. Location Management Strategies increasing Privacy in Mobile Communication Systems. 1996. Information systems security: facing the information society of the 21st century, pages 39-48.

Laitinen, H. and Juurakko, S. and Lahti, T. and Korhonen, R. and Lahteenmaki, J. Experimental Evaluation of Location Methods Based on Signal-Strength Measurements. 2007. IEEE Transactions on Vehicular Technology, 56(1):287--296.
Evaluates how RSSI-based location estimation performs at 5 different VHF frequencies. The authors conclude that the propagation model and antenna used influence the results significantly once the mobile station is far enough from the transmitter that the RSSI varies appreciably over time at a fixed location. Objects in the environment noticeably influence signal propagation, making location estimation with simplistic propagation models inaccurate.
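
The sensitivity the authors observe is easy to reproduce with the standard log-distance path-loss model (their exact propagation models are not reproduced here): the inferred distance depends exponentially on the assumed path-loss exponent, so modest model errors become large range errors:

    def rssi_to_distance(rssi_dbm, p_at_1m_dbm, path_loss_exp):
        """Invert the log-distance model RSSI = P_1m - 10*n*log10(d)."""
        return 10 ** ((p_at_1m_dbm - rssi_dbm) / (10 * path_loss_exp))

    # The same -70 dBm reading maps to very different ranges as the
    # assumed exponent n varies (illustrative numbers):
    for n in (2.0, 2.5, 3.0):
        print(n, round(rssi_to_distance(-70, -30, n), 1))  # 100.0, 39.8, 21.5 m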


Road Pricing

R. Devereux and J. Dawson and M. Dix and G. Hazel and D. Holmes and S. Glaister and S. Joseph and C. Macgowan and B. Nimick and M. Roberts and L. Searles and R. Turner and S. Gooding and S. Hickey and W. Rickett. Feasibility Study of Road Pricing in the UK -- Report. 2004. Department for Transport.
Examines whether road pricing as a national policy, rather than individual city congestion pricing, is feasible. In particular, it notes that the public and local authorities need to be convinced of the benefits of such a scheme, and that it should be compatible with other European schemes (there is an EC directive on charging schemes). Setting the prices correctly is key: they must discourage congestion without causing social exclusion. It also notes that the positioning technology required for national road charging is not yet accurate or mature enough to be used, but is likely to become so within the next decade.

DfT. The Future of Transport: A Network for 2030. 2004. Department for Transport.
The UK government's transport objectives and plans for the long term. The report covers all types of transport, and in particular notes that road congestion will increase in the medium term, while attempting to 'build our way out of it' is not feasible. Hence public transport must be improved. Meanwhile, walking and cycling should be encouraged, and congestion charging schemes investigated. An annex to the report describes various road charging schemes in place around the world, mostly for HGVs (including a German scheme due to be implemented in 2005 that uses GPS technology to enable per-motorway segment charging).

Transport for London. London Congestion Charging Technology Trials. 2005. TfL.
Describes the phase I technology trials carried out for the London Congestion Charging scheme. Examines the current system of Automatic Number Plate Recognition (ANPR), GPS and other satellite positioning systems, GSM triangulation, and microwave/infrared-based (Dedicated Short Range Communication, DSRC) tags. It concludes that to implement charging, as opposed to simply enforcement, the current system is not accurate enough (85 p.c. with a single sighting). Positioning technologies (GPS or GSM) are not yet mature or accurate enough, and would require a several hundred metre buffer zone bordering the congestion area. DSRC systems are already used for motorway tolling, and performed well in the trials. One constraint is the height and reach of the poles on which equipment is mounted. Another is the battery life of the tags, which is severely reduced if a vehicle parks near a beacon (reader). It is suggested that DSRC used for charging, in conjunction with ANPR for enforcement, would be the best solution for the planned congestion area expansion, with a position-based system following nationally in the next decade. The possibility of Value Added Services if a position-based technology were used is also briefly discussed.

K. Funderburg and M. Grant and E. Coe. Changing Insurance One Mile At a Time. 2003. Contingencies, pages 34--38.
High-level overview of pay-as-you-drive insurance programmes for private vehicles. Mentions the problem that there is no proven link between miles driven and accident rate. Suggests that drivers will decrease their car usage, since more costs become variable as opposed to fixed.

T. Litman. Implementing Pay-As-You-Drive Vehicle Insurance. 2002. Institute for Public Policy Research.
Overview of the concept of pay-as-you-drive insurance, with economic implications described, and frequently expressed doubts answered. More focus on trusted odometer readings than on using GPS and other telematics for remote reporting (considered too expensive).

T. Litman. Distance-Based Vehicle Insurance as a TDM Strategy. 2004. Victoria Transport Policy Institute.
More detailed and up-to-date overview of pay as you drive. Notes that schemes are likely to be effective if they are voluntary, rather than mandatory. Schemes include Kilometer Polis in the Netherlands, GM's On-Star, and Norwich Union's PAYD. GPS-based schemes have high implementation costs (150 USD per car per year), whereas schemes where distance or time are recorded and downloaded are more favoured. One scheme also collects acceleration and braking rates. There is still uncertainty as to whether these schemes will decrease claim rates enough to offset the reduced premiums. Governments need to ensure there are no legal difficulties, and provide incentives for companies to offer PAYD schemes. Education is therefore needed on all sides.

J. Fawcett and P. Robinson. Adaptive Routing for Road Traffic. 2000. Computer Graphics & Applications, 20(3):46--53.
Uses Trafficmaster historical data and a central route management service, together with live updates whilst en route, to plan journeys. Accuracy is highly dependent on the density of the deployed traffic sensors.

J. You and T. Y. Kim. Toward Developing An Expert GIS-Based Travel Time Forecasting Model With Congestion Pattern Analysis. 1998?. University of Illinois at Urbana-Champaign & Seoul National University.
A description of the different methods of analysing historical congestion data and predicting future congestion: historical profile approaches, time-series models, neural networks, nonparametric regression models, traffic simulation models, and dynamic traffic assignment models are briefly overviewed.

E. Guizzo. Network of Traffic Spies Built Into Cars in Atlanta. 2004. IEEE Spectrum Online.
An overview of the Traffic Spies project being conducted at the Georgia Institute of Technology. 500 cars are equipped with onboard GPS units connected to the engine management unit, and one million trips' worth of data has been collected. Data is currently transmitted over mobile 'phone connections, with plans for intervehicular communication.

Applied Generics Ltd. RoDIN24 Road Traffic Monitor (GSM Edition). 2004. Applied Generics Ltd.
Describes the collection of GSM mobile 'phone data to ascertain the number and speed of users on a national road network. The system uses passive probes at base station switching centres to collect Network Management Reports, which contain a handset's current cell (and estimated distance to it), the power levels received from other base stations, and an estimate of channel quality to the base station. The system has been used commercially in Holland, with a positive independent evaluation. The key benefits are low cost, high speed of deployment, universal coverage, and no added polling load on the cellular network. This contrasts with the use of loop data, which is inaccurate when traffic is moving at low speeds; moreover, on average 30 p.c. of loop detectors are out of action at any one time (road works and faults). In addition to congestion information, the system can also provide the location of emergency calls.

A. May et al. Transport 2050: The Route to Sustainable Wealth Creation. 2004. Royal Academy of Engineering.
A report similar to that from the Department for Transport, reinforcing the view that the true costs of transport should be charged for, i.e. road use charging will be unavoidable. It also notes that spending on safety technologies across the various modes of transport is disproportionate to the risk of injury or death: a person is 8 times more likely to die in a road transport incident than in any other mode, yet the UK's main focus is on rail safety. The report also points out that an increase in the information provided to transport planners and the public will be necessary in order to encourage people to use public transport: this will require significant use of new technology.

David N. Cottingham and J. J. Davies and A. R. Beresford. Congestion-Aware Vehicular Traffic Routing Using WiFi Hotspots. 2005. in Proceedings of Communications Innovation Institute Workshop, pages 4--6. Cambridge-MIT Institute.

A. R. Beresford and R. K. Harle. Location Privacy for Automated Congestion Charging. 2004. Unpublished. (Internal Digital Technology Group Report)

Alastair R. Beresford and Jonathan J. Davies and Robert K. Harle. Privacy-Sensitive Congestion Charging. 2006. in 14th International Workshop on Security Protocols (to appear).
National-scale congestion charging schemes are increasingly viewed as the most viable long-term strategy for controlling congestion and maintaining the viability of the road network. In this paper we challenge the widely held belief that enforceable and economically viable congestion charging schemes require drivers to give up their location privacy to the government. Instead we explore an alternative scheme where privately-owned cars enforce congestion charge payments by using an on-board vehicle unit containing a camera and wireless communications. Our solution prevents centralised tracking of vehicle movements but raises an important issue: should we trust our neighbours with a little personal information in preference to entrusting it all to the government?

Transport for London. Central London Congestion Charging Impacts Monitoring Third Annual Report. 2005. TfL.
Describes the socio-economic impacts of congestion charging since its introduction in February 2003. Broadly, congestion charging has reduced congestion in the area concerned by 30 p.c. on average, measured in terms of minutes of delay experienced per km. Other effects appear to be negligible. Bus provision was substantially increased prior to the introduction of charging, and patronage has, as expected, increased. Net revenue from the scheme is 90 million pounds, 80 p.c. of which is spent on the bus network. It is of note that income from the charge and income from penalty charges are almost equal. Repeat offenders are wheelclamped. Enforcement takes place by ANPR carried out on photographs of vehicles. If the plate exists in the database as paid for, the image is discarded; if not, the image is subjected to manual verification using DVLA vehicle make and model information. This ensures that few false positives occur. Fleet charging is done in an automated manner, but this covers a small subset of vehicles which, if incorrectly recognised as not being part of a fleet, will be charged in any case. The only risk therefore appears to be that a vehicle not part of a fleet will be mistaken as being so, which may well be excluded by the confidence rating system in place for ANPR.

DfT. The Government's Response to the Transport Select Committee's Report, Road Pricing: The Next Steps. 2005. Department for Transport.
This report considers the 21 points raised by the Transport Select Committee and acknowledges them. There are few new proposals of import. It is of note that the TSC considers that road pricing schemes should begin to be trialled now, rather than waiting for suitably priced technology. The M6 toll and expressway proposals are also mentioned, as is the Lorry Road User Charge (concern was expressed that a custom, expensive solution was being procured -- this is no longer the case and is being examined in the context of the implementation of a national road pricing scheme). The TSC also noted that road pricing was an opportunity for the true cost of travel to be passed to consumers. The government (unsurprisingly) is of the opinion (overtly at least) that the cost should not rise.

D. Schrank and T. Lomax. The 2005 Urban Mobility Report. 2005. Texas Transportation Institute.
Evaluates the state of the road network in a large number of major population centres in the United States. Congestion cost the USA $63 bn in 2003, and this figure is rising ($62 bn in 2002). Demand is outstripping supply, and whilst new road building will provide some congestion alleviation, it is noted that in the long term the extra capacity is soon taken. The report recommends that a package of measures be implemented, including increasing efficiency, managing demand, road pricing, the provision of alternative modes of transport, and an increase in road building.

Commission for Integrated Transport. European Best Practice in Delivering Integrated Transport Key Findings. 2001.
Compares the state of the UK's transport network to those of other EU states. The UK has some of the most congested roads in the EU, with very high car use. Costs of public transport are very high, partly due to very low government subsidies. Cycling and walking have been in significant decline, and the government must provide the facilities to make such modes of transport safer and generally viable (dedicated off-road cycle paths, etc.). Overall the UK has a history of chronic underinvestment in transport, resulting in a car-centric society where people have long commuting times, despite a high population density. The report contains a large number of graphs comparing the state of play in many of the EU countries.

S. Ison. Pricing Road Space: Back to the Future? The Cambridge Experience. 1996. Transport Reviews, 16(2):109--126.
Describes the 1993 trial of a congestion metering scheme in Cambridge, UK. The system involved the installation of a taxi meter-style unit into all cars within a 15 mile radius of the centre of the city. The unit would be switched on when it passed a beacon on entry to the city. If the vehicle stopped more than 4 times in 0.5 km, or travelled at 10 km/h or below for 0.5 km, the unit would begin debiting credit from a smart card personal to the user, with the cost clearly displayed. On exiting the city, the unit would detect another beacon and become dormant. The card would act like an ignition key, such that if it became significantly overdrawn it would prevent the car from being started. Cars without units, or with non-functioning ones, would be (somehow) detected by the roadside beacons. Day trippers would buy a ticket to stick on their windscreen, which would be detected by cameras on the beacons. Charges were not predictable, being based on how much time was spent in congestion, rather than acting as an immediate deterrent. However, congestion metering can be said to be distance-based and directly related to the amount of congestion, unlike cordon-based schemes. Metering is also easily understood by the user.
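
The trigger rule as reported translates directly into a small check over odometer/speed samples; this is a minimal sketch of that rule, with some simplifications (zero-speed samples are counted as stops, and "at 10 km/h" is read as "at or below"):

    def should_charge(samples, window_km=0.5, max_stops=4, slow_kmh=10):
        """samples: (cumulative_distance_km, speed_kmh) pairs in time order.
        Charge as soon as, over some 0.5 km stretch, the vehicle has
        stopped more than four times or averaged 10 km/h or below."""
        for i, (d_start, _) in enumerate(samples):
            if samples[-1][0] - d_start < window_km:
                break  # remaining stretches are shorter than the window
            window = [s for s in samples[i:] if s[0] - d_start <= window_km]
            stops = sum(1 for _, v in window if v == 0)   # zero-speed samples
            avg_speed = sum(v for _, v in window) / len(window)
            if stops > max_stops or avg_speed <= slow_kmh:
                return True
        return False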

G. Lyons and G. Dudley and E. Slater and G. Parkhurst. Evidence-Base Review: Attitudes to Road Pricing. 2004. Department for Transport.
Reviews international literature on attitudes and acceptance of road pricing schemes. Emphasises that the acceptability of such schemes is contingent on the revenue being clearly hypothecated for transport improvements, and not general taxation. Marketing of such schemes is also key, as is the (perception of) equity (fairness) between different groups. For public acceptability, technology must be both efficient and flexible. Privacy is a concern in some countries, but it is decreasing in the EU as people realise that they are capable of being tracked in any case. Demonstrating that the technology is fair and minimises the number of non-payments is also important.

ROCOL. Road Charging Options for London: A Technical Assessment. 1999. Government Office for London.
Evaluation of the different possible technologies for alleviating congestion in London. Paper licenses were considered to require too great an implementation effort, and the rate of evasion would be high. The use of onboard units was projected to take at least 3 years to design and install. 80 p.c. of daily trips in Central London between 07:00 and 19:00 cross the boundary, with a further 15 p.c. of trips being made solely within the area, but by vehicles that had crossed the boundary at some point throughout the day. A workplace parking levy was also considered, but it was noted that the effort would be significant, given that access to private parking for the purposes of road charging is not permitted by current legislation, and that the complexities of shared parking spaces (40 p.c. mixed use across the Greater London Authority area), and which vehicles were employee or customer owned would also present a difficulty.

S. Ison and T. Rye. Implementing Road User Charging: The Lessons Learnt from Hong Kong, Cambridge and Central London. 2005. Transport Reviews, 25(4):451--465.
Compares the Cambridge and Hong Kong schemes, and examines why the Central London scheme was successful, whereas these were not. The need for congestion to be an obvious problem, the requirement for a political champion, for a single administrative body, and for viable public transport to be implemented prior to the scheme, as well as a clear presentation to the public of the scheme's aims and method of operation are all key. Timing is also important; in Hong Kong the scheme was proposed shortly after major road improvements had been completed. Privacy is also a concern, which was solved in the Cambridge trials. Revenue hypothecation for transport improvements is extremely important.

S. Borins. Electronic Road Pricing: An Idea Whose Time May Never Come. 1988. Transportation Research Part A, 22A(1):37--44.
Examines the Hong Kong road pricing scheme, and the reasons for its failure. Three hypotheses are presented: the political agenda was not right, due to the uncertainty over the future of the territory; the government made tactical errors; or such a scheme is doomed to fail in an urban democratic society. The author cites the reasons of privacy, the unequal distribution of costs and benefits (those who pay most fund improvements in public transport, rather than receiving tangible improvements themselves), and the difficulty of implementation (e.g. the deployment of electronic number plates to all vehicles).

S. Morrison. A Survey of Road Pricing. 1986. Transportation Research Part A, 20A(2):87--97.
Surveys the economic theories behind road pricing. The key point to note is that a road pricing scheme should aim to toll users the difference between the average variable cost that they bear (e.g. fuel, tyres...) and the short run marginal cost (i.e. the actual cost that each extra journey makes to society as a whole, assuming a given road capacity [short run]). This then means that users pay the full social cost of journeying. Clearly the social cost varies throughout the day depending on the degree of congestion, and therefore tolls should also vary throughout the day. The paper goes on to describe various economic models, e.g. those incorporating a stochastic demand, or a degree of demand uncertainty, that calculate the optimal tolls for different situations. Distribution of the welfare benefit derived from tolls is difficult: should they be distributed back to the individual users (in which case the rich in essence become richer), or equally to different population groups (in which case groups are equal, but individuals in those groups will suffer). Interestingly, in the Singapore experience this inequity of benefit distribution does not appear to have taken place. The author notes that in Singapore's case, a single level of government, a good public transport system, and a trust in the government all helped the introduction of the scheme. Political and legal issues elsewhere would complicate matters. Whilst road pricing will provide the greatest net benefit, other schemes may be more acceptable whilst providing a lower net benefit (e.g. one way streets, parking restrictions, etc.).
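
In symbols (the notation here is mine, not the paper's), the prescription is that the toll should equal the congestion externality, so that the price each driver faces equals the short-run marginal social cost:

$\tau(q) = SRMC(q) - AVC(q)$

where $q$ is the traffic flow, $AVC$ the average variable cost the driver already bears, and $SRMC$ the short-run marginal cost of an extra journey; since $SRMC$ rises with congestion, the optimal toll necessarily varies over the day.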

S. Iwanowski and W. Spering and W. Coughlin. Road Traffic Coordination by Electronic Trading. 2003. Transportation Research Part C, 11(5):405--422.
Outlines how vehicles could be fitted with units that allow the driver to specify a personalised strategy for how the unit should bid for space on roads. Each unit enters into an auction for time slots on road segments. The maximum and minimum bids that a unit may make can be specified, as can limits on the bids for each road segment from any vehicle. A central co-ordinator runs the auction and keeps track of each vehicle's balance. Slots can also be exchanged, much as on a stock market. The authors state that simulations have shown that a 30 p.c. penetration rate is required for a significant effect on congestion to be observed. The paper also outlines a road clearance method, whereby vehicles wishing to go faster than those in front of them pay those vehicles to move out of the way. Such dynamic trading systems better reflect the values users place on their time, rather than imposing a uniform valuation. A market-based approach is also predictive, and flexible with varying levels of congestion. The authors note that incentivising the voluntary fitting of vehicles with such a system will not be easy.
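
The co-ordinator's role reduces to running repeated auctions for (segment, time-slot) pairs and maintaining balances; a minimal single-round, sealed-bid sketch (the paper's actual market rules are richer) might look like:

    def run_auction(bids, capacity, balances):
        """One sealed-bid round for a single (road segment, time slot).
        bids: {vehicle_id: offered_amount}; capacity: slots available;
        balances: mutable account map kept by the central co-ordinator.
        Winners pay their own bid (a first-price simplification)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winners = []
        for vehicle, amount in ranked:
            if len(winners) == capacity:
                break
            if balances.get(vehicle, 0.0) >= amount:  # skip unfunded bids
                balances[vehicle] -= amount
                winners.append(vehicle)
        return winners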

P. Blythe. Congestion charging: Technical options for the delivery of future UK policy. 2005. Transportation Research Part A, 39:571--587.
Examines four options for the implementation of congestion charging. Traditional ANPR is briefly described, as is DSRC. The author then goes on to mention wide area location systems based on GPS, and location using 3G cell coverage. It is suggested that enforcement of a wide area scheme simply involves the deployment of camera gantries in locations where they are not aesthetically displeasing. Finally, the article describes the use of ad hoc networks of motes to perform charging, by deploying low cost nodes so pervasively that their locations can be used by vehicles for charging purposes.

C. Birle. Use of GSM and 3G cellular radio for Electronic Fee Collection. 2004. in Road User Charging Seminar. Vodafone Group R&D. IEE.
Presentation on Vodafone's Mobile Positioning System, combining satellite-based positioning with cell ID data to increase accuracy. Trials show 99.9 p.c. accuracy on highways. This approach seems to dovetail well with charging as an in-vehicle mechanism, but it is not clear whether the handset registers with the network frequently enough for a detailed enforcement log to be kept.

E. Sullivan. Implementing Value Pricing for U.S. Roadways. 2003. European Journal of Transport and Infrastructure Research, vol 3.
Describes the various congestion/toll pricing programmes in the USA, in particular the creation of extra lanes in the middle of highways as expressways or HOT/HOV lanes. There appears to be significant resistance in America to congestion pricing, due to the impression that it increases economic inequalities between the poorest and richest in society. The author notes that there has also been difficulty in obtaining approval for allowing single occupant vehicles to use HOT lanes for a charge. Finally, it is noted that no scheme in the USA uses distance- and time-based charges in a situation with multiple entries and exits (as opposed to an expressway).

N. Lewis. Road Pricing: Theory & Practice. 1994. Thomas Telford. 2nd edition.
Provides a comprehensive summary of the various congestion charging schemes in operation around the world, both proposed and implemented. Of interest are the Singapore scheme, which has a long history (beginning with paper discs, and now electronic), the failed Hong Kong scheme (presumed to be mainly due to bad political management), the Bergen and Oslo cordon schemes, and the unimplemented Cambridge congestion metering system.

Automobile Association. Motoring Costs for Petrol Cars. 2005. AA UK.
Sets out the fixed and variable costs for petrol cars of different prices.

D. Beevor. Lorry Road User Charge: a debacle in the making. 2004. Transport News Network.
Written from a lorry driver's perspective, this article argues that the lorry road user charge will not decrease HGV congestion at rush hour, as drivers have no option but to drive in peak periods, due to working time regulations and the costs of drivers and vehicles not being on the roads for approximately half the working day. The author also points out that GPS satellite technology is easily disrupted by interfering with antennae, and that other systems additionally use DSRC for checking and enforcement. He concludes by noting that other EU member states' systems are not interoperable, and that a simpler solution would be to harmonise fuel duty throughout the EU.

P. Krebs and U. Balmer. Fair and Efficient: The distance related heavy vehicle fee (HVF) in Switzerland. 2004. DETEC Switzerland.
Describes the Swiss lorry user charge. This consists of an onboard unit (compatible with the Austrian scheme) for domestic vehicles that is connected to the vehicle's tachograph. GPS is used as a backup to ascertain distance travelled, not as a precision location system. The OBU is switched on and off by DSRC beacons at the entry/exit points of the country. Charges are per km and are varied for different vehicle weights, emissions classes, and times of day. Foreign vehicles may voluntarily install an OBU, or must declare their mileage at entry to and exit from the country, subject to random checks. The object of the charge was to encourage a shift of freight to rail, rather than only to decrease congestion at peak hours.

Matthias H. Rapp and Ueli Balmer. The Swiss Distance Related Heavy Vehicle Fee (LSVA) -- A Novel Approach to Area-Wide Road Charging. 2004. Rapp Trans AG and Swiss Federal Office for Spatial Development.
Describes the Swiss lorry charge in high-level technical language. HGV traffic growth has been countered, with a 5 p.c. drop in HGV traffic occurring after the implementation of LSVA. Consolidation of lorry firms was seen. The cost of living has not increased. Negligible increase in rail freight as yet, though rail's share was already one of the highest in the world. Collection costs of the fee are 5-7 p.c. of income. Net income is 773 million CHF.

Road Traffic Technology. LKW-MAUT Electronic Toll Collection System for Heavy Goods Vehicles, Germany. 2005. Road Traffic Technology.com.
Describes the lorry user charging system in Germany. Onboard GPS units with map matching, using a digital tachograph as a backup, ascertain which motorway segment the vehicle is on. ANPR infrared cameras at 300 locations around the country provide an enforcement mechanism, along with DSRC to confirm that the unit is functioning correctly. DSRC is also used to provide additional location support in some difficult areas. Once the vehicle exits the motorway, the OBU detects this via DSRC from a gantry, and sends the total toll incurred to the central authority (Toll Collect) via a GSM link. Manual payment is also possible via the Internet or at service stations, but requires the intended route to be entered.

G. Goodin. Enforcement of Managed Lanes with HOV Preference. 2005. in Proc. 12th International HOV Systems Conference.
Describes the options for HOV lane occupancy checking. In all cases this is a manual visual check. To make it more efficient, the author suggests more gantry mounted technology should be used to ensure that the vehicle's onboard tag is functioning correctly, rather than the occupancy verification officers doing so manually with a handheld device. Verification takes place in a low speed area. The abuse rates for the HOV lanes mentioned are 55 to 65 p.c., and 60 p.c. of court cases against violators were dismissed.

R. Harle and A. Beresford. Keeping Big Brother off the Road. 2005. IEE Review, 51(10):34--37.
Describes a novel scheme for congestion charging that preserves users' privacy.

Seth Stark and Doug McClanahan and Charlie Howard and Elizabeth Robins. Washington Transport Policy Update: Future Visions. 2005. Washington State Department of Transport.
Report on updating Washington State's transport policy. Includes a chapter on intelligent transport systems. Mentions a demonstration trial of distance-based GPS charging. Onboard units are estimated to cost 300 to 400 dollars (excluding software).

M. Lundberg (Ed.). Deliverable 7.2: Practical Implementation Guide for Cities. 2004. PRoGRESS project, Bristol City Council.
Describes the implementation lessons learnt from the European PRoGRESS project on road pricing/congestion charging. The performance of GPS is concluded not to be good enough to be of use in cities. It is also crucial that the public is given a clear explanation of how such advanced equipment works. ANPR is found to have a non-recognition rate of 7 to 15 p.c. The report recommends that all price calculation be performed by external infrastructure, as otherwise an onboard unit is too costly, and updating its software becomes problematic.

Satish V Ukkusuri and Ampol Karoonsoontawong and S Travis Waller and Kara M Kockelman. Congestion Pricing Technologies: Synthesis and an Evaluation Framework. 2005. in Proc. 84th Transportation Research Board Annual Meeting.
Describes the ELECTRA IV evaluation framework applied to various aspects of different congestion pricing technologies, and ranks them according to the results. This results in RFID being the preferred technology, followed by ANPR, GPS and IR, and then DSRC. The authors note that this analysis is somewhat subjective (in the values used for the enforcement and privacy factors), and is highly dependent on the type of scheme to which the technology is to be applied. They conclude that GPS will become the preferred technology once enforcement and privacy issues are solved.

iPico South Africa. Test Report: Single-lane Vehicle identification with UHF RFID. 2003. Module 1403, iPico Holdings Ltd.
Evaluates various different iPico RFID tags in gantry and pole mounted situations. Various vehicle speeds up to 160 km/h are used. Five tags are placed on the vehicle, which is evidently a more difficult test than would be expected in a real environment. The tag identification rate is 100 p.c. in all situations.

J. David Porter and David S. Kim and Hector A. Vergara. Electronic Road Pricing Systems: Capabilities, and Existing and Needed Technology Solutions. 2004. in Proc. Transport Research Board 83rd Annual Meeting.
Surveys various congestion charging schemes and compares what technologies each uses, and how charging takes place in each. The authors mention the Oregon Road User Fee Task Force system, where distance-based data is transferred to a central facility from service station fuel dispensers and the fee is computed and collected as part of a fuel purchase.

M. Nevin and L. Abbie. What Price Roads? Practical Issues in the Introduction of Road User Charges in Historic Cities in the UK. 1993. Transport Policy, 1(1):68--73.
Concluded that the main issues with the introduction of congestion charging were related to social policy, and that technology was largely up to the task (assuming current rates of progress).

Graham Pendlebury and Archie Robertson and David Rhind and Tony May. Managing Road Congestion. 2005. The Journal of the Foundation for Science and Technology, 18(9):8--12.
A collection of articles describing current government policy attitudes to technology on the road network. Road pricing is seen as technologically feasible, but current policy is held back by the social implications, as well as by the lack of global standardisation of technology. Longer term, co-operative driving schemes are envisaged, as well as more efficient usage of the current road network (e.g. using the hard shoulder as an extra lane, and ensuring that this remains safe through rapid detection of incidents and variable message signs to redirect traffic as appropriate).

Peter Samuel. Of Sticker Tags and 5.9 GHz. 2004. ITS International.
Describes the current state of deployment of transponders used for tolling/charging. Sticker tags are read-only and cheap; once stuck to the windscreen they cannot be removed without being destroyed, and they are easily produced and mass-deployed. Usage of the 5.9 GHz spectrum will be in accordance with the forthcoming IEEE 802.11p standard. Advantages are higher bandwidths of 27 Mbps (compared to 0.3 to 5 Mbps with current DSRC) and greater ranges (300 to 1000 m, compared to 3 to 30 m).

A. E. Wiggins. A Review of Cashless Electronic Toll Collection Technology. 1994. in Proc. Transport Session, ICAP Expo.
Brief survey of toll collection implementations and projects. Of interest is the definition of an open toll system (one where the vehicle is detected only on entry to the area, e.g. a bridge) versus a closed one (where detection also occurs at the exit, such as a segment-based toll road). Also, when considering toll plazas, tailgating and frontgating (where a user passes a barrier on the transponder of the vehicle immediately behind), and inter-lane crosstalk, are described as problems. The author also notes that a single tag can be used for multiple schemes if it is partitionable.

Bern Grush. Optimizing GNSS-Based Mobility Pricing for Road-use, Parking, and Pay-As-You-Drive Insurance. 2005. in Proc. 4th Europäischer Verkehrskongress, pages 1--5. Applied Location Corp.
Attempts to develop a single unit for road pricing, parking charges, and PAYD insurance, using satellite-based location systems. Only the locations in which a vehicle parks (for longer than a certain number of minutes) are stored, and then transmitted to a central processing facility; the user must trust the facility to preserve their privacy. Parking metering is more acceptable than road charging, and can be enforced 'cheaply' using mobile ANPR units. Authorities can use anonymised data to ascertain where to send mobile ANPR units. To perform RUC, the parking logs are uploaded, and the zones driven in are inferred by drawing straight lines between parking points. The author admits that this does not work over long distances between parking points, and that it relies on the accuracy of GNSS systems.
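
The zone-inference step can be sketched by sampling the straight line between consecutive parking points against a zone grid; the grid cells here are an invented stand-in for whatever zoning the real system would use:

    def zones_crossed(parked_from, parked_to, cell_km=1.0, steps=100):
        """Approximate the set of grid-cell 'zones' crossed by the straight
        line between two consecutive parking points (planar km coords)."""
        zones = set()
        for i in range(steps + 1):
            t = i / steps
            x = parked_from[0] + t * (parked_to[0] - parked_from[0])
            y = parked_from[1] + t * (parked_to[1] - parked_from[1])
            zones.add((int(x // cell_km), int(y // cell_km)))
        return zones

The author's own caveat shows up directly in the sketch: any zone the vehicle actually visited away from the straight line is simply missed.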

Per Kaageson and Jos Dings. Counting the Kilometres -- And Paying for Them. 2000. European Federation for Transport and Environment.
Summary leaflet of a longer report on how to introduce an EU-wide charging system. GPS-based distance charging is technologically feasible but not currently legally feasible, whereas digital tachograph recording is. The leaflet also summarises the benefits of an EU-wide system (similar to that currently used for charging for EU air corridors: payment to a central authority, which then distributes it in the correct proportions to those whose airspace has been used).

Kenneth A. Small and José A. Gómez-Ibáñez. Road Pricing for Congestion Management: the Transition from Theory to Policy. 1998. Chapter in Road Pricing, Traffic Congestion and the Environment. Cheltenham: Edward Elgar.
Overview of many of the salient congestion charging schemes. Good detail on the Scandinavian schemes, which were originally intended as revenue generation (and air quality improvement) projects, not as congestion reduction measures. Mentions the A1 autoroute in France, where people who previously travelled at off-peak times now travel at peak, despite the charge, as there is no longer severe congestion. Also overviews the Stuttgart experiment, where a trial group of 400 users had real charges imposed on them that varied by time of day. Users do change their route on the basis of variable pricing schemes (Hug et al., 1997). Defines road pricing as a collective noun for congestion pricing, parking charges, and toll roads.

Vehicle Occupancy Ltd. Cyclops Product Brochure. 2005. VOL.
Describes a system using infrared cameras to count the number of occupants in a vehicle. This can be used in toll situations to ensure that vehicles using HOV lanes do in fact have real passengers (and not dummies). The system is currently being tested on the Forth Road Bridge in Scotland.

José A. Gómez-Ibáñez and Kenneth A. Small. Road pricing for congestion management: a survey of international practice. 1994. NCHRP Synthesis 210 of Highway Practice, National Research Council, Washington D.C.

CityLink Melbourne Ltd. CityLink Made Easy. 2005. Transurban Group.
Describes the Melbourne CityLink toll road: 22 km of motorway linking 3 freeways. Fully electronic tolling with DSRC tags is used, with enforcement cameras to catch offenders and to provide an alternative for those not wishing to have tags.

E-ZPass. E-ZPass Individual Guidebook. 2004. E-ZPass.
User manual for the E-ZPass DSRC tolling system, used in various states in the US.

Edward Sullivan. Continuation Study to Evaluate the Impacts of the SR 91 Value-Priced Express Lanes. 2000. Cal Poly State University.
Contains a section introducing the SR-91 Express Lanes. These are two lanes in either direction with no entries/exits for 10 miles. Tolling is by means of a microwave tag. Variable pricing is used to regulate congestion on the link.

K.W. Ogden. Privacy Issues in Electronic Toll Collection. 2001. Transportation Research Part C, 9(2):123--134.
Details the privacy aspects of tolling from a policy (rather than technology) perspective. The issue of national data protection legislation is covered. Tracking of individual tags, enforcement, access to the information by third parties, and what can be inferred if such data is examined in conjunction with other databases are also examined.

Intelligent Vehicle Trial: Intelligent Transport Systems Privacy Discussion Paper. 1999. Tasmania Department of Infrastructure, Energy and Resources.
Examines the privacy needs of users of toll road systems, and the current legislative environment in Australia and Tasmania in particular.

Eva Schelin. Road Charging in Sweden An Update. 2005. in Proc. IEE Road Transport Symposium, pages 6/1--6/10.
Overview of Swedish transport policy, the Stockholm congestion charging trial (2006; cordon-based; time-varying price of up to SEK 20 per entry, capped at SEK 60 [£4.50] per day; 5.8 GHz DSRC; ANPR cameras for enforcement for evidential reasons [and for those who have no tag]; free transponder), the toll bridges, and the proposed HGV charging scheme (early stages; would not be gantry-based due to the number of remote roads).

Silke Jung. HGV Tolls in Germany: Innovative, Environmentally Friendly and Fair. 2005. in Proc. IEE Road Transport Symposium, pages 5/1--5/7.
Overview of the German HGV scheme. Charging is only carried out on the motorway network. Average toll rate 12.4 cents/km. Motorway split into segments, charge per segment, so less accurate positioning is required. GSM link back to Toll Collect allows OBU to transmit list of traversed segments for charging. Revenue is hypothecated for transport improvements. 3 billion Euros revenue in 2005. Enforcement using ANPR and DSRC for querying OBUs.

Jos Boot and Pieter Boot and Erik T. Verhoef. The Long Road Towards the Implementation of Road Pricing: the Dutch Experience. 1999. in Proc. ECMT/OECD Workshop on Managing Car Use for Sustainable Urban Travel.
Details the 4 different demand management schemes that have been proposed in the Netherlands, as well as the social and legal barriers to them. In 1991 a system of toll plazas around the Randstad was proposed, whose main objective was fund raising; these were rejected on the grounds of complexity and space requirements. In 1993 a system of rush hour traffic permits was proposed, by which a flat fee would be paid per year for the privilege of being able to drive at rush hour. This would have had the disadvantage of providing no restriction to keep people out of rush hour traffic. In 1994 rekeningrijden was proposed, where electronic toll cordons would be set up around Amsterdam, Rotterdam, Utrecht and Den Haag; this was narrowed to a trial of just two cities after public resistance, and shelved in 2001. A new proposal, known as MobiMiles, is for a kilometre-based charging scheme instead of fuel tax. Good reasons for this are that simply increasing fuel tax will result in a greater number of trips abroad for fuel, and will cause people to buy more fuel-efficient cars, rather than decrease congestion. The authors believe that GPS-based charging is technically feasible. They quote work comparing the effects of increasing fuel duty with congestion charging, and note that the effect on car use is much greater with the latter. Finally, they account for why the congestion charging schemes were rejected by the Dutch public, and give the reasons put forward by the Dutch Automobile Association (ANWB).

Roel Pieper. MobiMiles: Bewust op Weg (Conscious on the Road). 2001. Minister van Verkeer en Waterstaat, The Netherlands. (In Dutch)
Examines whether a per-kilometre road use charge would be feasible in The Netherlands. The report states that the scheme should cause the costs of journeys to vary, and thus differentiate them to the user. It should also have stringent privacy mechanisms in place, and be reliable. The system should be made attractive from the outset to users, part of which is naming it correctly (hence the name MobiMiles rather than distance-based charging). The system should be based on a robust architecture, allowing up to 10 million users, and must be compatible with the European directive on interoperability of charging systems. The author examines each of the technology elements required, and concludes that present technology is good enough (including location systems, though for a GSM-based system base stations would need to be upgraded). He notes that finding which road a vehicle is on is technically feasible, but has privacy issues. The author also notes the need for a mechanism to provide charging policies to the user, whether by smart card or over the cellular network. Mechanisms for billing and fraud detection are also examined, and he concludes that these exist. Overall, the technology exists, but ensuring that the solution is socially acceptable and politically palatable is the main problem.

Odd I. Larsen and Knut Ostomoe. The Experience of Urban Toll Cordons in Norway: Lessons for the Future. 2001. Journal of Transport Economics and Policy, 35(3):457--471.
Gives an overview of the Norwegian toll rings in Bergen, Oslo, and Trondheim. These were originally set up as revenue generation measures, not for demand management. Revenue was hypothecated (by law under the Road Act) to road transport improvements, and was used to fund an under-city tunnel in Oslo. More recently the Act has been updated to allow road pricing to be used, with the destination of revenue widened to the transportation sector. The political feasibility of road pricing is examined, and a set of tariffs proposed. Lessons learned include the financial feasibility of tolls, the necessity of all major political parties committing to the idea, and that after a period of introduction, tolls become part of everyday life. The authors note that it will be tempting for the Oslo City Council not to make the introduction of road pricing gradual, so as to gain more revenue quickly.

Kristian Waersted. User Based Financing of Urban and Interurban Transport Infrastructure in Norway. Practical Experiences and Future Challenges. 2000. in Proc. 24th International Baltic Road Conference.
Details the Norwegian tolling schemes in Bergen, Oslo, and Trondheim, their background, and how revenue is used. Interurban tolling is now being used within 200 km of Oslo. Attitudes to the toll rings vary between cities (most probably due to the differences in pricing structures). Trondheim now uses a multi-sector cordon system.

Foo Tuan Seik. An Effective Demand Management Instrument in Urban Transport: the Area Licensing Scheme in Singapore. 1997. Cities, 14(3):155--164.
Describes the ALS scheme in Singapore. From June 1975 a paper certificate was required to enter the Restricted Zone at peak times; certificates were daily or monthly. Later the scheme's hours and area were extended, and a part-day license (valid only in the inter-peak period) was created, to spread traffic flows more evenly: previously people had been waiting until the peak-hour charge was not in force and then surging into the area, whereas with the part-day license the charge was only a little less, so the rate of ingress was more uniform. In combination with other restriction measures (including the high price of car acquisition), this has reduced congestion to a manageable level. In addition, more limited tolling on bypass routes (the 'Road Pricing Scheme') around the restricted area has been introduced to decrease congestion there. ALS has now become fully electronic, with cameras used for enforcement.

Chin Kian Keong. Road Pricing: Singapore's Experience. 2002. in IMPRINT-EUROPE Seminar.
Describes the evolution of Singapore's road pricing techniques, from paper licenses to a fully electronic DSRC system. Of note is that ERP charges users per entry to the Restricted Zone, whereas under ALS charges were levied per day; this decreased traffic from, e.g., businessmen using their cars to go to meetings outside the CBD and then returning. When ERP was introduced, a 10 month programme of fitting In-vehicle Units (IUs) was conducted, and 98 p.c. of cars were voluntarily fitted at no cost to the users. The payment balance is kept on a smart card that must be inserted into the IU, thus preserving privacy much better than a tag- and account-based system. A publicity programme was also run, including a period where the ERP gantries were switched on but charging a zero price, allowing users to become accustomed to the idea.

Toll Collect GmbH. Truck Toll in Germany User Information. 2005. Bundesamt für Güterverkehr.
User guide for the German HGV toll system. This consists of an onboard unit that utilises GPS signals to ascertain which motorway segment the vehicle is on. In addition, gyroscopes and the vehicle's tachograph are used to calculate an approximate position in the event that a GPS signal is unavailable; dead reckoning and map matching are used to cope with temporary signal loss. Charges are based on a list of charges and road segments stored in the unit, the vehicle's emissions class, and its number of axles (all vehicle information is stored in the unit). Charges are communicated to a central control centre at frequent intervals over an encrypted GSM link, which is also used to update map information and tariffs on the OBU. For enforcement, DSRC interrogation of the OBU is combined with infrared ANPR cameras, mounted on around 300 gantries (10 p.c. of which are in use at any one time) and in mobile enforcement units. Users who do not wish to install an OBU must register their route manually at one of the dedicated terminals or over the Internet. In the rare cases where a GPS signal is unavailable or unreliable (e.g. roads running closely in parallel), DSRC units can be deployed to provide additional location information. An average 500 km trip with a low-emissions vehicle with 4 or more axles costs 50 euros.
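
Using the figures quoted here and in the Jung overview above (12.4 cents/km on average; roughly 10 cents/km for a low-emissions vehicle, which makes 500 km come to 50 euros), the OBU's per-trip computation reduces to a sum over traversed segments. The rate table below is illustrative, not Toll Collect's:

    # Illustrative tariff table (cents/km) keyed by (emissions class, axle band).
    RATES = {("low", "4+"): 10.0, ("high", "4+"): 14.0}

    def trip_toll_euros(segment_lengths_km, emissions, axle_band):
        """Sum the per-segment charges that the OBU would report to the
        control centre over its GSM link."""
        rate = RATES[(emissions, axle_band)]
        return sum(length * rate for length in segment_lengths_km) / 100.0

    # 500 km of motorway in 25 km segments, low emissions, 4+ axles:
    print(trip_toll_euros([25.0] * 20, "low", "4+"))  # 50.0 euros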

Pi-Ming Cheng and Max Donath and Xiaobin Ma and Shashi Shekhar and Kenneth Buckeye. Evaluation of Digital Maps for Road User Charging Applications. 2006. in Proc. 85th Transportation Research Board Annual Meeting.
Digital maps generally have a guaranteed maximum error of 20.3 m (1:24,000 scale or greater) or 12.2 m (under 1:24,000). For roads that are side by side, even with perfect GPS accuracy, the authors calculate a typical maximum permissible error of 4 m. With GPS errors, this means that a digital map-based system would give an incorrect result some of the time. The authors examine how accurate several different digital maps are, and conclude that with the best digital maps currently available, and the best GPS receivers on the market (centimetre-level accuracy), a system will only succeed in correctly classifying the road that a vehicle is on 73 p.c. to 87 p.c. of the time (depending on the type of road being considered).
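
The headline numbers follow from simple geometry: with two parallel roads, a fix is assigned to the wrong road whenever the cross-track error exceeds half the centreline separation. Assuming a zero-mean Gaussian cross-track error (an assumption; the paper's own error budget is more detailed), the misclassification probability is:

    from math import erfc, sqrt

    def p_wrong_road(separation_m, sigma_m):
        """P(cross-track error exceeds separation/2 towards the other road)
        for a zero-mean Gaussian error with standard deviation sigma_m."""
        return 0.5 * erfc(separation_m / (2 * sigma_m * sqrt(2)))

    # Two centrelines 8 m apart give the 4 m permissible error quoted above;
    # with a 3 m (1-sigma) receiver, roughly 9 p.c. of fixes land on the wrong road:
    print(round(p_wrong_road(8.0, 3.0), 3))  # 0.091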

Highway Industry Development Organization. Increasing Deployment of ETC and DSRC in Japan. 2002. ITS Review Japan, pages 1--2.
Brief description of Japan's electronic toll collection system. There are approximately 1,200 toll gates, and it is expected that 90 p.c. of traffic will use the service. OBUs with smart IC cards are used. This eliminates the need for manual collection facilities, saving on space and costs.

Commission of the European Communities. Directive of the European Parliament and of the Council on the Widespread Introduction and Interoperability of Electronic Road Toll Systems in the Community. 2003. 2003/0081 (COD), European Community.
Explains the need for a common European standard for tolling OBUs. Various countries are currently using, or proposing to use, microwave-based systems. The EC recommends that such systems should not be deployed after 2008, and that their migration to satellite-based schemes be completed by 2012. The proposal expects GNSS technology to improve sufficiently that a pan-European scheme can be deployed by 2005 for all HGVs, and by 2010 for all cars. The OBU cost is expected to fall to between 20 and 50 euros. Any new schemes after 2005 should be composed of GPS/GSM and 5.8 GHz microwave technologies.

Fabio Nusso (Ed.). Deliverable 5.2: Final Implementation Demonstration Plan. 2003. PRoGRESS project, Bristol City Council.
Details the practical implementations of several congestion charging scheme trials in Europe. In particular talks about the Rome scheme, where a Limited Traffic Zone (LTZ) has been set up, with 22 gates. A DSRC pass is used for access. Permits can be for transit or also for parking. A trial involving charging users per entry in the evening peak was also carried out.

A. Tully and P.T. Blythe. Investigating the next generation mobile wireless technology to deliver a mobile pervasive computing environment for road user charging and other ITS services. 2005. in Proc. 1st IEE Automotive Electronics Conference, pages 233--243.
Brief summary of the general groups of technologies for congestion charging; DSRC, GPS, ANPR. Looks forward to the use of ad hoc networks/motes/smart dust for congestion charging, and 4G systems.

Appian Technology. Talon: Description and Technical Specification. 2003. Appian Technology plc.
Manufacturer's documentation on their ANPR system. They claim 95 p.c. accuracy from a single sighting.

Kaspar Koolstra. Potential benefits of a freeway slot-reservation system: Queuing costs versus scheduling costs. 1999. in Proc. Urban Transport Systems Conference.
Describes the economics of using a slot reservation system on motorways. The costs associated with the extra queuing are compensated by the reduction in congestion. The author notes that in practice this scheme may be more effective than congestion charging, but that the technological implementation details (particularly cost) are unknown.

Ross Anderson. On the Security of Digital Tachographs. 1998. Lecture Notes in Computer Science, 1485:111--125.
Describes various frauds and possible solutions for HGV tachographs (digital and mechanical).

J.T. Wong. Basic Concepts for a System for Advance Booking for Highway Use. 1997. Transport Policy, 4(2):109--114.

John Walker. Road Use Charging Seminar: Chairman's Address. 2005. in Proc. IEE Road Transport Symposium, pages 1/1 -- 1/3.
Makes the distinction between road USE charging and road USER charging. In the USA, user charging is the sum of all the costs experienced by the user when they travel (fuel, insurance, maintenance, tax...). Road use charging is a synonym for congestion pricing. Hence, road user charging is road use charging plus taxes and other running costs.

DfT. TSGB Chapter 7: Traffic -- data tables. 2005. Department for Transport.
Tables showing levels of traffic in the UK from 1935 to 2004, in billions of vehicle kilometres.

Jonas Eliasson and Karin Brundell Freij and Kars Hultkranz and Christer Ljungberg and Ulf Rämme. Reference Group Summary: The Stockholm Trial -- January. 2006. Stockholmförsöket.
Describes the initial impacts of the Stockholm congestion charge trial. This is a cordon-based system, using DSRC tags, with 18 charge points. Congestion in the inner city has fallen by 25 p.c. Public transport usage has increased by 8 p.c. since the previous year (though there had been an upward trend in any case). Queues on arterial routes have all but disappeared. No increase in congestion on boundary routes has been seen. Little increase in park and ride usage. Journey times at peak periods are very much reduced. Pricing is time-variant, with the current price displayed at the toll points.

Gunnar Söderholm. Facts and Results from the Stockholm Trial. 2006. Stockholmförsöket.
Summarises the effects of the Stockholm trial. Congestion decreased by 22 p.c. during working hours. Ring road traffic increased somewhat (though it has increased year on year since opening). Journey times were reduced by a half and a third in the morning and afternoon rush hours respectively. Some modal shift to public transport took place. People did not work from home more, or change the times at which they travelled. Emissions were lowered in the inner city (various statistics in the report). Personal injuries due to accidents reduced by 5 to 10 p.c. Retail businesses in the area were not affected, and benefited from the reduced congestion.

Thales Group. Electronic Toll Collection for Efficient Traffic Control. 2006. Thales Group.
Overviews European (and some global) deployments of electronic toll collection, primarily using DSRC, and describes how Thales has been involved with them. Notes that there are legal issues with a pan-European system, such as ANPR by operators other than the police being illegal.

Vlad Coroama. The Smart Tachograph -- Individual Accounting of Traffic Costs and Its Implications. 2006. in Proc. PERVASIVE, pages 135--152.
Describes an implementation of an electronic tachograph. This paper briefly surveys a few congestion charging schemes. It then mentions the use of GPS-based charging, and a sensor board to augment this location system. There is a user interface for the driver, in order for them to ascertain the charge due. The charging method envisaged is a charging plugin, downloaded from the authority and running on the unit, that would be present in every car. Privacy considerations are briefly mentioned, the benefit being that no data need be uploaded to the authorities. Enforcement is not considered in detail.

Majid Sarvi and Ryota Horiguchi and Masao Kuwahara and Yukiharu Shimizu and Akinori Sato and Yasuhiro Sugisaki. A Methodology to Identify Traffic Congestion Using Intelligent Probe Vehicles. 2003. in Proc. 10th ITS World Congress.
Brief report of using probe vehicles to classify traffic congestion. GPS positions are saved on events rather than continuously. The authors classify events as short trips and short stops, long stops, and gaps. They also ascertain which data corresponds to a particular trip, by using data about brake, indicator and parking lights. They then briefly show how probable each type of event is, given that another has taken place. More detail is needed.

Diane Louise Venable and Randy B. Machemehl and Mark A. Euritt. Electronic Toll Collection Systems. 1995. 1322-2, Center for Transportation Research, University of Texas at Austin.
Comprehensive, but dated, report on tolling technologies. A good description of the various systems is given. Technologies such as RF and CCTV are described. No mention is made of GNSS, cellular positioning, ANPR, or distance-based technologies due to the age of the report. Operation (and staff considerations) of toll plazas are detailed, as are implementation issues such as capacities of different schemes. Other issues such as privacy and acceptability are briefly considered from a consumer's perspective, and requirements for ETC systems derived.

Christine Evans-Pughe. Road Watch. 2006. IET Engineering & Technology Magazine.
Describes the UK's network of ANPR cameras. There are now 3000 cameras all connected to a central computer in London. This stores each sighting, along with a brief video of the occupants and an image of the numberplate. The system can handle up to 35 million sightings per day. Data is kept for 2 years, or for longer if special authorisation is given.

Stephen Glaister and Daniel J. Graham. An evaluation of national road user charging in England. 2005. Transportation Research Part A, 39(7--9):632--650.
An economic justification of why national scale road pricing is sensible.

Transport for London. Distance Based Charging: Report on Transport for London's (TfL) GPS OBU Trial. 2006. TfL Congestion Charging (Traffic & Technology).
Describes TfL's trial using a variety of vendor-supplied GPS units in a large-scale test in the city of London. A reference map was constructed using high accuracy GPS surveying. OBUs were carried in various vehicles, and compared to a reference GPS unit. Roads were divided into segments (between intersections), and the GPS fixes used to identify those segments, using standard and vendor-supplied map matching algorithms. Average location error was 6.67 metres over all OBUs; the best OBU achieved 5.11 metres. With map matching against TfL's highly accurate map, 98.6 p.c. of segments were correctly identified, 1.4 p.c. were not identified (i.e. not charged for), and a further 1 p.c. of segments (that were not in reality travelled on) were incorrectly identified as having been traversed. With vendor-supplied map data, these values were 96.8, 3.2, and 6.4 p.c. (all with the best performing OBU). Billing error was 0.82 p.c. in the best case, and 5.7 p.c. on average over all systems. Of concern (and not highlighted in the report) is that the standard deviation of errors is enormous (Figures 13 and 14). Even in the very best case, some cost estimates were over 4 p.c. in error; others had far greater spreads (up to 10 p.c. in the 'best' case, and 70 p.c. in the 'worst' case). Sigma-5 integrity is also considered for one OBU, where the number of samples that can be guaranteed to be within a particular radius with 99.999 p.c. confidence is determined. Even with an integrity level of 100 metres, an entire route cannot be guaranteed to have been correctly recorded to sigma-5 confidence.

Charlie Henderson and Panikos Papagapiou and Adrian Gains and Jim Knox. Driving Crime Down: Denying Criminals the Use of the Road. 2004. United Kingdom Home Office.
Describes and evaluates the Laser 2 project, where 23 police forces in the UK made use of ANPR systems for crime reduction. This involved checking vehicle registration plates with databases from the Police National Computer and the DVLA. Intercept teams could then stop vehicles as required. The systems appear to have worked well, with a large number of arrests (13,499), resulting in an arrest rate per officer of 10 times the national average. Drugs and weapons were also recovered. The report does not give any technical details, but comprehensively evaluates the programme.

Transport for London. Proposed Western Extension to the Central London Congestion Charging Scheme: Report to the Mayor Following Consultation with Stakeholders, Businesses, Other Organisations and the Public. 2005. TfL.
Describes the area the WEZ will cover, (briefly) the technology it will use, the expected effects of the scheme, and the consultation process and its results.

Transport for London. Congestion Charging Technology Trials: Stage 2 Final Report. 2006. TfL.
Describes the trials and results of the technology evaluations carried out by TfL for developing the congestion charging scheme in London. GPS OBUs are evaluated, as is a mini-zone DSRC deployment, and the performance of WiFi (2.4 and 5.7 GHz) for data transmission to OBUs. GNSS performance, even with simulated projections of GPS+Galileo+EGNOS, still does not yield 4 visible satellites in certain parts of London, even 75 p.c. of the time. With the DSRC trial, there is the issue of matching the ANPR image captured with the tag read (this succeeded 99.6 p.c. of the time in the trial). There is also the problem of needing separate poles for the ANPR cameras and the DSRC beacons, because (much of the time) the lengths of ``free'' road (not obscured by trees etc.) are less than 30 metres. This then means that techniques are required for spatially correlating the point at which a vehicle's tag was read with its position when the ANPR image is captured. For WiFi communication with an OBU, the maximum range in an urban environment appeared to be 100-150 metres; 2.4 GHz performed more reliably than 5.7 GHz. Location-based Value Added Services delivered over cellular technology are also briefly considered.

S.M. Markose and A. Alentorn and D. Koesrindartoto and P. Allen and P. Blythe and S. Grosso. A Smart Market for Passenger Road Transport (SMPRT) Congestion: An Application of Computational Mechanism Design. 2006. in Proc. 1st World Congress on Social Simulation.
Provides the mathematics behind a slot reservation auction system designed as part of the UK's Foresight project. The authors use a VISSIM simulation to find the maximum amount of traffic allowable inside a particular cordon (i.e. the ``cap'' is the ``point at which the total distance travelled in the cordon area by all cars for a fixed time period falls with an increase in traffic''), in Gateshead, UK. They then utilise their agent-based model to calculate how the different income groups of society in the area would be affected by different levels of bid in a Dutch auction for slots in the cordon.

Don Mackinnon. DSRC Charging Application Specification for the United Kingdom: Requirements for the OBU to RSE Transactions Using 5.8 GHz DSRC to Support Off-Board Accounts. 2006. Department for Transport.
The standard for OBU to roadside equipment communications, including what data shall be available, and what security features the OBU will have. This builds on related DSRC standards from CEN, and security standards such as CARDME, CESARE, and PISTA. It was formerly known as OMISS Volume 3.

Keith Jones and Keith McGhee and Don Mackinnon. DIRECTS Road User Charging Demonstration Project: Results. 2005. in Proc. IEE Road Transport Symposium, pages 2/1--2/8.
Overview of the DIRECTS project. Uses 5.8 GHz DSRC as the principal technology, also testing GPS and GSM positioning. ANPR for enforcement. Large trial in Leeds. Various practical lessons learnt, including metallised (heated) windscreens cause DSRC to perform badly. No results available.

European Parliament & Council. Directive 2004/52/EC Of the European Parliament and of the Council of 29 April 2004 on the Interoperability of Electronic Toll Systems in the Community. 2004. Official Journal of the European Union, 166:124--143.
The European Directive for a European Electronic Toll System.

European Parliament & Council. Directive 2006/38/EC of the European Parliament and of the Council of 17 May 2006 amending Directive 1999/62/EC on the charging of heavy goods vehicles for the use of certain infrastructures. 2006. Official Journal of the European Union, 157:8--23.
European Directive on the Eurovignette.

Edward Calthrop and Stef Proost and Kurt van Dender. Parking Policies and Road Pricing. 2000. Urban Studies, 37(1):63--76.
Describes how parking charges influence the price of a cordon charge for a city. Free workplace parking causes congestion in cities, and hence taxing it can be beneficial. Setting both cordon and parking charges in tandem maximises welfare. The authors conclude that, interestingly, parking charges alone achieve nearly 70 p.c. of the maximum utility gain possible, whereas a cordon charge on its own achieves only 53 p.c. of the possible gain. Multiple cordons may work better. The results require cautious interpretation as they are calibrated for a specific geographic area.

Jonas Sundberg. Meter by metre. 2006. ITS International, 12(6):42--43.
Brief details on the conceptual design for a Swedish HGV charging scheme. The scheme will encompass all roads, and utilise an OBU with GNSS positioning. Location/time fixes will be stored on a secure IC card, and then uploaded in non-real-time over a cellular link to the tax authority, who will perform the necessary map matching and billing. A microwave DSRC interface will also be present. Battery powered OBUs are a possibility for foreign vehicles visiting the country.

Karel Feix. Fast Focus. 2006. ITS International, 12(6):35--36.
Brief details on the deployment of the Czech distance-based HGV tolling scheme. This covers 171 gantries on 950 km of motorways by January 2007, and 1198 km of selected first-class roads by July 2007. Deployment of the first phase took 9 months. Charging is by DSRC-based OBUs. The main installation issue was obtaining permission from private landowners for cabling, though gantry installation on busy roads was also problematic. Expected income is 229-275 million USD.

Wolfgang Beier. Truck Tolling in Germany One Year On. 2006. in Proc. 1st European Roundtable on Intelligent Roads. DaimlerChrysler Services.
German HGV charging system. 35 p.c. of vehicles are registered outside of Germany. In excess of half a million OBUs, 650,000 registered trucks. 250 million transactions (on highway) carried out in 1st year, 2.85 billion euros collected. Less than 0.1 p.c. of transactions disputed. Below 2 p.c. violators (down from 10 p.c.). OBU software allows for toll/tariff data exchange whilst on the move.

DfT. Report on Implementation Feasibility of DfT Road Pricing Policy Scenarios and Proposed Business Architecture. 2004. Section B: Cost Model Report, Department for Transport.
Outlines the costs of road pricing implementation in the UK. Considers the costs for various scenarios. Unfortunately, it asserts that a high performance OBU would cost between 100 and 525 pounds, plus 100 pounds for installation.

Stephan Hartwig and Michael Buchmann. Empty Seats Traveling. 2007. NRC-TR-2007-003, Nokia Research Center.
Outlines the benefits and difficulties of car pooling using smart phones to tell users where to be picked up and also to charge them for their travel.

David N. Cottingham and Alastair R. Beresford and Robert K. Harle. A Survey of Technologies for the Implementation of National-Scale Road User Charging. 2007. Transport Reviews, 27(4):499--523.
This paper surveys the technologies available for constructing a pervasive, national-scale road pricing system. It defines the different types of road pricing, the methods by which a vehicle's position can be determined, and then examines possible pricing regimes in the context of their technological requirements and implications. The issue of enforcement and the distribution of pricing policies are considered, and further complexities are outlined. An examination of the security aspects of such systems is made, focusing particularly on the need to ensure privacy using technological, rather than solely procedural, methods. The survey concludes that a pervasive, national-scale deployment is unlikely to be technically achievable in the short term.

Carmela Troncoso and George Danezis and Eleni Kosta and Bart Preneel. PriPAYD: Privacy Friendly Pay-As-You-Drive Insurance. 2007. in Proc. ACM WPES.
Surveys current PAYD implementations. Proposes to decrease the privacy impacts of PAYD insurance by performing billing calculations on the in-vehicle black-box. The minimum amount of data required for billing would be transferred to the provider, whilst the entire GPS trace would be encrypted and stored on a USB key. The encryption key would be split into two, one part on the USB device, one part sent to the provider and thence to the user by post. This would prevent any party except the user from being able to decrypt the trace. However, (in my view), were the provider to be dishonest, the black box could simply be configured to send both halves of the key to the provider. The scheme does not really consider the insurers' reasons for PAYD (data mining, in part), and misses the point that to perform in-vehicle billing, the unit must be much more complex and have up-to-date map and pricing data.

DfT. Transport Trends: 2007 Edition. 2007. UK Department for Transport.
Excellent collection of statistics concerning UK transport, both public and private.

John Brownfield and Andy Graham and Helen Eveleigh and Faber Maunsell and Heather Ward and Sandy Robertson and Richard Allsop. Congestion and Accident Risk. 2003. Department for Transport Road Safety Report Number 44, Centre for Transport Studies, University College London.
Examines the link between congestion and accident rates. On motorways, accident rates are increased with congestion, though the seriousness of injury is generally lessened. In urban and peri-urban areas accident rates decrease with congestion, most probably because drivers in these areas are familiar with the roads that they are traversing.


Ad-Hoc Networks

F. Bennett and D. Clarke and J. Evans and A. Hopper and A. Jones and D. Leask. Piconet -- Embedded Mobile Networking. 1997. IEEE Personal Communications, 4(5):8--15.

S. Corson and J. Macker. Mobile Ad Hoc Networking (MANET): Routing Protocol Performance Issues and Evaluation Considerations. 1999. RFC 2501, IETF.

G. Xylomenos and G. C. Polyzos and P. Mähönen and M. Saaranen. TCP Performance Issues over Wireless Links. 2001. IEEE Communications Magazine, 39(4):52--58.

K. Rayner. Mesh Wireless Networking. 2003. IEE Communications Engineer, 1(5):44--47.

C. Moss. The Nodes Revolution. 2004. IEE Communications Engineer, 2(1):18--21.

Laura Marie Feeney. Ad hoc Networking & IEEE 802.11. Unpublished.

E. Royer and C. Toh. A Review of Current Routing Protocols for Ad-Hoc Mobile Wireless Networks. 1999. IEEE Personal Communications, pages 46--55.

C. Chiang and H. Wu and W. Liu and M. Gerla. Routing in Clustered Multihop, Mobile Wireless Networks. 1997. in Proc. IEEE SICON'97, pages 197--211.

S. Murthy and J. J. Garcia-Luna-Aceves. An Efficient Routing Protocol for Wireless Networks. 1996. Mobile Networks and Applications, 1(2):183--197.

C.E. Perkins and E.M. Royer. Ad-hoc On-Demand Distance Vector Routing. 1999. in Proc. 2nd IEEE Wksp. Mobile Comp. Sys. and Apps., pages 90--100.

D. Johnson and D. Maltz and J. Broch. DSR: The Dynamic Source Routing Protocol for Multihop Wireless Ad Hoc Networks. 2001. Chapter in Ad Hoc Networking. Addison-Wesley.

V. D. Park and M. S. Corson. A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks. 1997. in INFOCOM (3), pages 1405--1413.

C. Perkins and P. Bhagwat. Highly Dynamic Destination-Sequenced Distance-Vector (DSDV) Routing for Mobile Computers. 1994. in ACM SIGCOMM'94, pages 234--244.

R. Stefanova and G. Girling and J.L.K. Wa and P. Osborn. The design and implementation of a low power ad hoc protocol stack. 2000. in Proc. IEEE Wireless Communications and Networking Conference, pages 1521--1529.

M. Younis and M. Youssef and K. Arisha. Energy-aware management for cluster-based sensor networks. 2003. Computer Networks, 43(5):649--668.

M. Maleki and K. Dantu and M. Pedram. Power-aware source routing protocol for mobile ad hoc networks. 2002. in Proc. International Symposium on Low Power Electronics and Design, pages 72--75. ACM Press.

S. Singh and M. Woo and C.S. Raghavendra. Power aware routing in mobile ad hoc networks. 1998. in Proc. 4th annual ACM/IEEE conference on Mobile computing and networking, pages 181--190. ACM Press.

C.K. Toh. Maximum battery life routing to support ubiquitous mobile computing in wireless ad hoc networks. 2001. IEEE Communications Magazine, 39(6):138--147.

M. Conti and G. Turi and G. Maselli and J. Crowcroft and S. Östring and P. Michiardi and R. Molva and J. Costa Requena and I. Defilippis and S. Giordano and A. Puiatti. MobileMAN: Architecture, protocols and services, deliverable D5. 2003. IST, 2001-38113, CNR, Cambridge University, Eurecom, HUT, SUPSI.

P. Papadimitratos and Z.J. Haas. Secure Routing for Mobile Ad Hoc Networks. 2002. in Proc. of SCS Communication Networks and Distributed Systems Modeling and Simulation Conference 2002, pages 27--31.

Y. Hu and D. Johnson and A. Perrig. SEAD: Secure Efficient Distance Vector Routing in Mobile Wireless Ad Hoc Networks. 2002. in Proc. 4th IEEE Workshop on Mobile Computing Systems and Applications (WMCSA '02), pages 3--13.

S. Capkun and L. Buttyan and J. Hubaux. Self-Organized Public-Key Management for Mobile Ad Hoc Networks. 2002. in Proc. ACM International Workshop on Wireless Security (WiSe 2002).

D. Cavin and Y. Sasson and A. Schiper. On the accuracy of MANET simulators. 2002. in Proc. 2nd ACM international workshop on Principles of mobile computing, pages 38--43. ACM Press.

A. Barroso and U. Roedig and C.J. Sreenan. Maintenance Awareness in Wireless Sensor Networks. 2004. in Short Paper Proc. 1st European Workshop on Wireless Sensor Networks (EWSN).

L. M. Feeney. Energy Efficient Communication in Ad Hoc Wireless Networks. Unpublished.
Unpublished chapter. Good summary of energy concerns in ad hoc networks, along with actual figures for WLAN consumption.

L. M. Feeney and M. Nilsson. Investigating the Energy Consumption of a Wireless Network Interface in an Ad Hoc Networking Environment. 2001. in Proc. IEEE INFOCOM.
Measurements (in detail) of actual IEEE 802.11b interface power consumptions, and the factors affecting these. Useful for justifying receive power $\simeq$ idle power. Also reinforces the linear relation between energy usage and data sent, but emphasizes the very high fixed cost of keeping the receiver circuit continuously powered in an ad hoc network, which must also be taken into account.
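
As a worked restatement (a sketch, not figures from the paper): the per-packet send cost fits a linear model $cost \simeq m \times size + b$, where $m$ is the incremental cost per byte and $b$ a fixed per-packet cost, with the continuous idle-listening drain accounted for as a separate, and often dominant, term.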

Jinyang Li and John Jannotti and Douglas S. J. De Couto and David R. Karger and Robert Morris. A Scalable Location Service for Geographic Ad Hoc Routing. 2000. in Proc. ACM MobiCom, pages 120--130.
Describes the Grid Location Service (GLS) used to locate nodes in ad hoc networks prior to using geographic-based routing to transmit packets to them. The protocol is somewhat similar to the CAN DHT protocol. The world is partitioned into a hierarchy of squares. A node broadcasts its current location to all other nodes in its unit square. It also picks location servers in all adjacent squares at each level of the hierarchy, and continuously updates them with its current location and speed. To find the location of a node, the requestor uses the destination node's ID. It routes a query to the node with the lowest ID greater than that of the destination that it is aware of. Routing proceeds in this way, until the query reaches the appropriate location server. Location server selection is by means of hashing the ID of the node to be located. Because of the hierarchy of squares, there are a greater number of location servers close to the node than further away. Location servers further away are updated less often. This results in the average number of hops for a query to be resolved being approximately a fixed overhead above the number required for the actual communication between the requestor and the destination. The protocol is simulated in large networks, with a random waypoint movement model, and random speeds. Radio transmission areas are assumed to be 250 m discs. Simulations are also performed with fractions of nodes failing and re-appearing. Results are generally good, though the system suffers if nodes are not uniformly distributed over the world. It is unclear how the system would fare if the routes nodes could take were more constrained.
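
A minimal sketch (Python; function and variable names are illustrative, not from the paper) of the successor rule GLS uses both to choose location servers and to route queries: pick the node with the least ID strictly greater than the target, wrapping around the circular ID space if none exists.

    def least_id_greater(target_id, candidate_ids):
        # GLS treats IDs as circular: the "successor" of the largest
        # ID wraps around to the smallest ID in the candidate set.
        greater = [i for i in candidate_ids if i > target_id]
        return min(greater) if greater else min(candidate_ids)

For example, with candidate nodes {5, 12, 30}, a query for ID 17 is routed towards node 30, whilst a query for ID 90 wraps around to node 5.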

Christian Maih"ofer and Reinhold Eberhardt and Elmar Schoch. CGGC: Cached Greedy Geocast. 2004. in Proc. Conference on Wired/Wireless Internet Communication (WWIC), pages 13--24.
Describes an ad hoc routing protocol designed for environments with high speed mobility. The protocol uses a greedy forwarding mechanism to direct packets to their geographical destination. If a destination cannot be reached due to a local maximum, it caches the packet. In contrast, GPSR, when it reaches a local maximum, forwards the packet to the host closest to an imaginary line drawn between the sender and the receiver, in a clockwise direction. CGGC keeps a packet in its cache until it receives a beacon packet from a new neighbour, and then forwards it if the neighbour is geographically closer to the destination than the node itself. The authors simulate CGGC and GPSR in a square simulation area with a random mobility model. Cached packets are retained either for a certain amount of time or until a certain number of packets is reached. For simulations with mobility, CGGC without any caching works at least as well as GPSR (though it performs badly in the case where nodes are stationary). A modest amount of caching increases the delivery rate markedly. The authors do not specify the transmission rate.
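
A minimal sketch (Python; names illustrative) of the greedy forwarding step with CGGC's cache-on-local-maximum behaviour, assuming known 2D positions:

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def greedy_next_hop(self_pos, neighbour_positions, dest_pos):
        # Pick the neighbour geographically closest to the destination.
        best = min(neighbour_positions, key=lambda n: dist(n, dest_pos),
                   default=None)
        if best is not None and dist(best, dest_pos) < dist(self_pos, dest_pos):
            return best
        # Local maximum: CGGC caches the packet and retries when a new
        # neighbour beacons, rather than entering GPSR's recovery mode.
        return None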

Paul F. Tsuchiya. The Landmark Hierarchy: A New Hierarchy for Routing in Very Large Networks. 1988. in Proc. ACM SIGCOMM, pages 35--42.

Bogdan Roman and Frank Stajano and Ian Wassell and David N. Cottingham. Multi-Carrier Burst Contention (MCBC): Scalable Medium Access Control for Wireless Networks. 2008. in Proc. IEEE Wireless Communications & Networking Conference, pages 1667--1672.
With the rapid growth of WLAN capability for mobile devices such as laptops, handhelds, mobile phones and vehicles, we will witness WLANs with very large numbers of active nodes for which very efficient medium access control techniques will be needed to cope with high loads and mobility. We propose a high performance solution based on an innovative node elimination algorithm that uses short and unmodulated bursts of energy during contention -- no data is exchanged. We also present a modified OFDM PHY layer, based on IEEE 802.11a, which allows sensing and bursting on individual subcarriers. We show that the protocol maintains a very low overhead and collision probability which lead to high and virtually constant network throughput at all analyzed network loads, even beyond 500 nodes. The protocol is validated by extensive simulation, comparing it against the IEEE 802.11a and SYN-MAC protocols.


Pricing & Incentives

R. Gibbons. A Primer In Game Theory. 1992. Prentice Hall.
Good introduction to game theory, with interesting applications. Sometimes a little long, but mostly easy to read.

J. Crowcroft and R. Gibbens and F. Kelly and S. Östring. Modelling Incentives for Collaboration in Mobile Ad Hoc Networks. 2004. Performance Evaluation, pages 427--439.
Mathematical model of a credit-based system, based on power and bandwidth parameters, for ad hoc networks. Provides results from simulation of a small network showing how the credit-based system is stable. Overall shows that a credit-based incentive/charging system is feasible for an ad hoc network.

P. Michiardi and R. Molva. A Game Theoretical Approach to Evaluate Cooperation Enforcement in Mobile Ad hoc Networks. 2003. in Proc. WiOpt'03.
Short paper. Presents co-operative and non-co-operative game theoretic proofs to provide a minimum bound on the number of co-operating nodes in the network, and to show that a reputation mechanism based on a history of past co-operation can be used to decide routing paths.

A. Urpi and M. Bonucelli and S. Giordano. Modelling cooperation in mobile ad hoc networks: a formal description of selfishness. 2003. in Proc. WiOpt'03.
Uses game theory to prove that, if the value each node places on energy is private information, a Nash equilibrium of whether nodes are willing to forward each other's packets is for all nodes to be selfish. With a suitable discount factor, punishments in later games within the tournament can be used to encourage co-operation. The degree of mobility determines the discount factor, as with high mobility punishment by local nodes must be swift, before the offending node gains a new set of neighbours. This mechanism also works well when nodes send traffic at a high rate: temporary punishment will decrease the utility they gain considerably. One key conclusion is that it is not possible to force a node to forward more traffic than it generates. Another is the observation that punishment mechanisms should allow the possibility of a node becoming re-integrated into the network at a later date, instead of being eternally isolated.

R. Battiti and M. Conti and E. Gregori and M. Sabel. Price-based Congestion-Control in Wi-Fi Hot Spots. 2003. in Proc. WiOpt'03.
Using variable pricing to control the number of users of a Wi-Fi hotspot. The optimal operating point can be proven to correspond to when the average amount of time the channel is idle is equal to the average period of time the channel spends busy when a collision occurs. To avoid sharp fluctuations in user populations, prices are changed differently for different users at random: the authors admit this may not be a good idea in practice, but this does achieve a very high level of link utilisation.

J. Crowcroft and R. Gibbens and F. Kelly and S. Östring. Modelling incentives for collaboration in mobile ad hoc networks. 2003. in Proc. of WiOpt'03.


Mobile IPv6 & 4G Cellular Networks

R. Chakravorty and J. Cartwright and I. Pratt. Practical Experience with TCP over GPRS. 2002. in Proc. IEEE GLOBECOMM, pages 1678--1682.
A good characterisation of GPRS links. TCP's performance over such links is examined using data from a live test bed, noting the effect of high RTTs on the slow start phase, excess queuing due to large network buffers, and the resulting inflated retransmit timer values. A solution using a transparent proxy is proposed, to clamp congestion window values. This strategy is then shown to be beneficial, in particular for short lived connections.

P. Vidales and L. Patanapongpibul and R. Chakravorty. Ubiquitous Networking in Heterogeneous Environments. 2003. in Proc. 8th International Workshop on Mobile Multimedia Communications.
Router advertisement caching decreases the handover time between base stations, compared to waiting to hear an RA when a handover is actually required.

D. Johnson and C. Perkins and J. Arkko. Mobility Support in IPv6. 2004. RFC 3775, IETF.
Detailed specification for the MIPv6 protocol. Includes overviews as well as exact packet specifications.

S. Thomson and T. Narten. IPv6 Stateless Address Autoconfiguration. 1998. RFC 2462, IETF.
Description of equivalents of DHCP for IPv6. Stateless configuration is by generating an address from the interface address and the local prefix advertised by the local router (over a multicast group). Duplicate address checking is performed on the result.

S. Seshan and H. Balakrishnan and R. Katz. Handoffs in Cellular Wireless Networks: The Daedalus Implementation and Experience. 1996. Kluwer Journal on Wireless Personal Communications.
Handoff latency is decreased by the Home Agent encapsulating packets destined for the mobile node and multicasting them to multiple base stations near the mobile node. The base station with the strongest signal transmits to the node, whilst the others buffer the last 12 packets. This does not scale well, and requires special user space software on the base stations to identify whether they are buffering or transmitting.

Mark Stemm and Randy H. Katz. Vertical handoffs in wireless overlay networks. 1998. Mobile Networks and Applications, 3(4):335--350.
Describes experiments on characterising the handover latency with upward and downward vertical handovers between WaveLAN, Infrared LAN, and a Metricom Ricochet packet wide area network. Various enhancements (fast beaconing, packet doublecasting, header doublecasting) in order to decrease handover latency are proposed and evaluated.

K. Laasonen and M. Raento and H. Toivonen. Adaptive On-Device Location Recognition. 2004. in Proc. PERVASIVE, pages 288--304.
Excellent paper on inferring the context of a user from their GSM cell location. Specifically, lightweight software runs on the 'phone, which then uses several algorithms to group GSM cells into user-named locations, and is able to predict on past experience what location a user is headed for.

C. Rensing and Hasan and M. Karsten and B. Stiller. A Survey on AAA Mechanisms, Protocols, and Architectures and a Policy Based Approach beyond: Ax. 2001. 111, Swiss Federal Institute of Technology.
A long and detailed outline of Authentication, Authorisation and Accounting (AAA) policies. There is a summary of what Ax policies (i.e. AAA policies extended with charging, pricing, billing and auditing) should ideally include, i.e. policy decision points (PDPs) in a network, fed by a policy repository, which in turn feed policy enforcement points (PEPs) located on the actual service equipment that users connect to. In my opinion could be heavily edited, but I'm not an expert!

H. Einsiedler and R. Aguiar and J. Jähnert and M. Liebsch and R. Schmitz and P. Pacyna and J. Gozdecki and J. Ignacio Moreno and I. Soto. The Moby Dick Project: A Mobile Heterogeneous All-IP Architecture. 2001. in Advanced Technologies, Applications and Market Strategies for 3G.
An overview of the Moby Dick project, which aims to provide UMTS-like voice services over multiple types of access network, utilising DiffServ QoS, and IPv6, as well as an AAAC (AAA and Charging) structure for interdomain roaming. The project also focuses on fast handovers between networks.

J. Crowcroft and C. Diot and M. Liberatore and Marcello Pias. Pilgrim -- A Communication Paradigm for Ad-hoc Communications. 2004. Unpublished.
An overview of opportunistic communication in ad hoc networks. Assumes that, e.g., users at a conference form small world networks. These are used to transmit data between peers, rather than using the ``structured'' Internet. Data queues up for transmission; when a carrier presents itself, it is sent, with the carrier then passing it opportunistically to the most likely next carrier that will take the data nearer to its destination.

E. Upton and M. Liberatore and J. Scott and B. Levine and J. Crowcroft and C. Diot. HAGGLE: Opportunistic Communication in the Presence of Intermittent Connectivity. 2004. Unpublished.
An overview of opportunistic communication in ad hoc networks. Describes the security issues, in particular the need for communities to provide incentives for users to give up resources to forward others' data. Also describes a reference Java Bluetooth implementation. Nodes communicate with others to tell them what applications they support, and then decide on the basis of digests what data they are willing to carry.

D. Watts and H. Strogatz. Collective Dynamics of Small-World Networks. 1998. Nature, 393:440--442.
An introduction to small-world networks. These are mid-way between regular lattices and networks with completely random connections. For this class of graphs, there is a high clustering factor (i.e. on average many of a vertex's neighbours will have connections between each other), and yet also a short characteristic path length (the average distance between any two nodes in the network). This models real life situations such as the spread of infectious diseases, or the power grid, well.
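
A minimal sketch (Python, using the networkx library) reproducing the paper's qualitative result: a small rewiring probability leaves the clustering coefficient high whilst sharply reducing the characteristic path length.

    import networkx as nx

    for p in (0.0, 0.01, 0.1):
        G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=p, seed=1)
        print(p, nx.average_clustering(G),
              nx.average_shortest_path_length(G))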

C. Policroniades and R. Chakravorty and P. Vidales. A Data Repository for Fine-Grained Adaptation in Heterogeneous Environments. 2003. in 3rd ACM Workshop on Data Engineering for Wireless and Mobile Access. ACM.
Short paper on creating a file system that decomposes files into their component elements. This then allows a sketch to be shipped over a network to a mobile device, for it to then use a policy to decide what elements to download. These elements can be deliberately degraded (e.g. in the case of images) by middleware on the server side, to decrease bandwidth usage. When migrating to a different type of link, the policy changes, and other less important and more bandwidth intensive elements of the file may be downloaded.

P. Vidales and R. Chakravorty and C. Policroniades. PROTON: A Policy-based Solution for Future 4G devices. 2004. in Proc. IEEE International Workshop on Policies for Distributed Systems and Networks.
An overview of PROTON, a system whereby multiple services can be used, and network usage optimised to those uses, depending on the user's preferences. More of a position paper, as no hard results are presented.

P. Vidales and J. Baliosian and J. Serrat and G. Mapp and F. Stajano and A. Hopper. Autonomic System for Mobility Support in 4G Networks. 2005. IEEE Journal on Selected Areas in Communications.
Describes in detail the PROTON system for using FSTs to model policies which can then take as inputs events triggered by, e.g., speed, position, link quality, QoS, etc. as context. This then allows the mobile device to make decisions based on the policy. Conflict resolution and FST compilation is done inside the network, freeing up resources on the mobile node.

R. Chakravorty and P. Vidales and K. Subramanian and I. Pratt and J. Crowcroft. Performance Issues with Vertical Handovers -- Experiences from GPRS Cellular and WLAN Hot-spots Integration. 2004. in Proc. IEEE PerCom.
Describes the results of vertical handovers of TCP connections using a loosely coupled MIPv6 testbed. Handover times are improved using more frequent Router Advertisements (RA), RA caching, Client assisted Binding Update simulcasting over multiple interfaces, and softhandovers with RA caching.

WAP Forum. Wireless Profiled TCP. 2001. WAP-225-TCP-20010331, WAP Forum.
Briefly describes the main adjustments to be made to the Transmission Control Protocol to optimise it for wireless links (wireless profiled TCP). These are: the usage of a (maximum) window size that reflects the Bandwidth Delay Product of the link; support of the window scaling option if window sizes of greater than 64 KB are supported; usage of the Round Trip Time Measurement mechanism (involving time stamps); a large initial window size for slow start; the ability to perform path MTU discovery; an MTU larger than the default IP MTU (with the ability to reduce this to the value specified in any SYN packets); selective acknowledgements; and Explicit Congestion Notification.
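
A hedged illustration (Python) of the first adjustment: sizing the receive buffer to the link's Bandwidth Delay Product so the window can fill the pipe. The bandwidth and RTT figures are assumptions, not values from the specification.

    import socket

    bandwidth_bps = 384_000   # assumed 3G downlink rate
    rtt_s = 0.5               # assumed cellular round-trip time
    bdp_bytes = int(bandwidth_bps * rtt_s / 8)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
    # Buffers over 64 KB additionally require the window scaling
    # option, negotiated automatically by modern TCP stacks.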

R. Ludwig and A. Gurtov. The Eifel Response Algorithm for TCP. 2005. RFC 4015, IETF.
Describes an algorithm for improving TCP's performance over lossy links (in particular wireless ones). The Eifel response algorithm relies on there being a detection mechanism for spurious timeouts, i.e. ones where the data eventually reached its destination, despite a timer having expired. Using this knowledge, TCP's congestion window and ssthresh can be reset to their previous values. In addition, by using time stamps in ACKs, the algorithm can prevent the go-back-N retransmit strategy that TCP-Reno employs when it hears an original ACK that has been delayed until after a retransmission takes place. There is a security risk that an attacker could convince a sender that any retransmissions it had made due to timeouts were spurious, thereby disabling TCP's congestion control mechanisms entirely. The detection algorithm used should therefore be robust to this type of attack.
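
A minimal sketch (Python pseudocode; not the RFC's normative text) of the response itself, assuming some detection mechanism has already flagged the timeout as spurious:

    class EifelResponder:
        def on_retransmit_timeout(self, cwnd, ssthresh):
            # Save the congestion state before TCP collapses the window.
            self.saved_cwnd, self.saved_ssthresh = cwnd, ssthresh
            return 1, max(cwnd // 2, 2)   # conventional timeout response

        def on_spurious_timeout_detected(self):
            # The segment was delayed, not lost: restore the saved state
            # (and suppress the go-back-N retransmissions).
            return self.saved_cwnd, self.saved_ssthresh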

R. Koodli. Fast Handovers for Mobile IPv6. 2003. RFC 4068, IETF.
FMIPv6 allows a mobile node to temporarily keep using its old Care of Address (CoA) when it connects to a new access router. The new access router sets up a tunnel to the old access router, allowing packets using the old CoA to be correctly routed. This reduces the handover time, by allowing a new address to be acquired (and Duplicate Address Detection performed) whilst the old address is still in use. Also, for a mobile node to update its address with a correspondent node (CN), the new address must be in the CN's binding cache; until this is the case the CN will reject any packets from the new CoA, so it is beneficial to ensure that the new address is not used until it is valid (and therefore the ability to keep using the old address is an advantage).

H. Soliman and C. Castelluccia and K. El Malki and L. Bellier. Hierarchical Mobile IPv6 mobility management (HMIPv6). 2003. RFC 4140, IETF.
HMIPv6 involves Mobility Anchor Points (MAPs) which are essentially surrogate Home Agents, in foreign networks. Not every foreign subnet must have a MAP, one may cover multiple subnets. All traffic for the MN comes via the MAP, and therefore if the MN changes address whilst remaining in the same MAP domain, it does not have to update its HA or any CNs. Another point is that the RTT to MAP is assumed to be much lower than that to HA, and therefore the entire binding update procedure is shorter.

H. Schulzrinne and S. Casner and R. Frederick and V. Jacobson. RTP: A Transport Protocol for Real-Time Applications. 2003. RFC 3550, IETF.
Section 6.4.1 contains details of how jitter should be calculated for RTP, which is how iperf implements its jitter calculation for UDP.
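
For reference, that estimator (variable names illustrative) is $D(i-1,i) = (R_i - R_{i-1}) - (S_i - S_{i-1})$ with the running filter $J_i = J_{i-1} + (|D(i-1,i)| - J_{i-1})/16$, which in Python is simply:

    def update_jitter(jitter, prev_recv, prev_sent, recv, sent):
        # Times in RTP timestamp units; R = receipt time, S = RTP timestamp.
        d = (recv - prev_recv) - (sent - prev_sent)
        return jitter + (abs(d) - jitter) / 16.0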

A. Cuevas and P. Serrano and C. J. Bernardos and J. I. Moreno and J. Jaehnert and K. Hyung-Woo and J. Zhou and D. Gomes and P. Gonçalves and R. Aguiar. Field Evaluation of a 4G True-IP network. 2004. in IST Mobile Summit.
Describes the MobyDick testbed project. This integrated QoS, AAAC, and inter-technology handovers into one platform using MIPv6. The access technologies were wired ethernet, WLAN, and TD-CDMA. Testbed implementations were set up in Madrid and Stuttgart, and users were asked to experiment with the system to ascertain its performance under load.

David N. Cottingham and P. Vidales. Is Latency the Real Enemy in Next Generation Networks?. 2005. in Proc. ICST CoNWiN.
Positions the idea that, apart from the effects of vertical handover latency on the TCP/IP stack, there is another challenge that shadows ubiquitous networking: the TCP-connection adaptation time required when roaming between two disparate wireless technologies can be even longer than the total handover period. Thus, the impact of the adaptation time needs to be minimised and considered when dealing with seamless networking in heterogeneous environments. Presents real data for handover times, and leaves open the question of how TCP should be adjusted to compensate.

C. J. Bernardos and I. Soto and J. I. Moreno and T. Melia and M. Liebsch and R. Schmitz. Experimental evaluation of a handover optimization solution for multimedia applications in a mobile IPv6 network. 2004. European Transactions on Telecommunications.

C. Bernardos-Cano and I. Soto-Campos and M. Calderón-Pastor and D. von Hugo and E. Riou. NEMO: Network Mobility in IPv6. 2005. Upgrade: The European Journal for the Informatics Professional, 6(2):36--42.
Describes the DAIDALOS project's MIPv6 extensions for supporting Network Mobility. These are similar to the mechanisms used for terminal mobility in MIPv6. Routing optimisation is key, as otherwise with nested mobile networks packets can end up being routed through multiple home agents (pinball routing), and not directly between devices that are physically adjacent. MIRON is a DAIDALOS protocol for performing route optimisation, depending on network conditions; MIRON is used at the Mobile Router for the mobile network, rather than for every connection and CN from the nodes inside the mobile network.

S. Baatz and M. Frank and R. Gopffarth and D. Kassatkine and P. Martini and M. Schetelig and A. Vilavaara. Handoff support for mobility with IP over Bluetooth. 2000. in Proc. IEEE LCN, page 143. IEEE Computer Society.
Describes BLUEPAC, a prototype IP mobility handoff scheme over Bluetooth. Bluetooth operates with a master transmitting only in even numbered time slots, stating which slave is allowed to use the subsequent odd slot. Packets must therefore be an odd number of slots in length. For handoff, a Bluetooth base station must first act as a slave to the client as it connects, forming a dedicated piconet; then a master-to-slave switch must occur, once the client has been synchronised into the existing piconet. TCP's performance in handovers between piconets is far from optimal, despite the ARQ provided by the Bluetooth link layer. TCP retransmission timers can, after several disconnections, be larger than the disconnection time, thus wasting bandwidth. The disconnection times occur because soft handovers are not realistic with Bluetooth (synchronising with two piconets and switching between them is not practical), and therefore on disconnection from one piconet, inquiry (the scanning to find the address of the other Bluetooth device) and paging (the clock synchronisation for the frequency hopping) are required. The work was performed at a time when no Bluetooth equipment was available, so an emulation system was used.

N. A. Fikouras and A. Udugama and C. Görg and W. Zirwas and J. M. Eichinger. Experimental Evaluation of Load Balancing for Mobile Internet Real-Time Communications. 2003. in Proc. 6th International Symposium on Wireless Personal Multimedia Communications (WPMC). Yokosuka-Kanagawa, Japan.

M. Meyer. TCP Performance over GPRS. 1999. in Proc. IEEE WCNC, pages 1242--1252.

S. Aust and N. Fikouras and D. Protel and C. Gorg and C. Pampu. Policy Based Mobile IP Handoff Decision (POLIMAND). 2005. draft-iponair-dna-polimand-02.txt, Work in Progress, IETF.
Proposes that handovers be forced (pro-actively) rather than waiting for a connection to be dropped. Such forcing should depend on a function that aggregates together various quantities, using a matrix of user-specified weights.

A. Gilbert and C. Schaubach. What is PMAC (Policy Management for Autonomic Computing)?. 2005. IBM.

J. McNair and F. Zhu. Vertical Handoffs in Fourth-Generation Multinetwork Environments. 2004. IEEE Wireless Communications, vol 11.
Good introduction to the field of 4G networks, what vertical handovers are, and what parameters need to be optimised. Mentions the use of utility functions for different possible network connections to inform the decision as to what network connection to handover to.

J. Tourrilhes. On-Demand BlueTooth: Experience integrating BlueTooth in Connection Diversity. 2003. Mobile and Media Systems Laboratory, HP Laboratories.
Adds various modules to BlueZ to allow detection of Bluetooth peers in an automated fashion. Also allows the detection of when peers disappear or connections to them become idle. Uses P-Handoff to allow handover of TCP connections from Bluetooth to 802.11b networks. Adds name resolution services for Bluetooth ad hoc networks by adding an SDP attribute that can be retrieved via the standard SDP querying interface.

Michael Wolf and Michael Scharf. Evaluation of Mobility Management Approaches for IPv6 based Mobile Car Networks. 2003. in Proc. KiVS.
Overview of various technologies for mobility management of networks (i.e. NEMO). Includes prefix scope binding updates (associating a BU with an entire prefix rather than a single address), HMIPv6 (where there exists a Mobility Anchor Point from which another CoA is obtained, as well as the CoA for the mobile node's current local network), Mobile IP (where the Mobile Router for the NEMO uses standard Mobile IP to construct a tunnel to its Home Agent).

Fatema Shaikh and Aboubaker Lasebae and Glenford Mapp. Client-based SBM Layer for Predictive Management of Traffic Flows in Heterogeneous Networks. 2006. in Proc. IEEE ICTTA.
Introduces the concept of Time Before Vertical Handover (TBVH). Relies on the network being aware of the position and received signal strength of a mobile node, and thus inferring its possible future trajectory, assuming a constant speed. This then allows the time until the node enters the coverage of a neighbouring network to be predicted, enabling adaptation and prioritisation of the flows the node is transceiving. Various assumptions are made about coverage areas, including a unit disc model, free space propagation of radio (signal strength), accurate and frequent positioning, and knowledge of movement models. The paper also introduces the concept of the stream bundle management layer for flow prioritisation for vertical handovers (not yet implemented?).
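
A minimal sketch (Python; names illustrative) of the prediction under the paper's stated assumptions: with position p, constant velocity v, and a circular coverage area of centre c and radius r, TBVH is the smallest non-negative root of $|p + vt - c| = r$.

    import math

    def time_before_entering(p, v, c, r):
        dx, dy = p[0] - c[0], p[1] - c[1]
        a = v[0]**2 + v[1]**2
        b = 2 * (dx * v[0] + dy * v[1])
        k = dx**2 + dy**2 - r**2
        if a == 0:                       # stationary node
            return 0.0 if k <= 0 else None
        disc = b * b - 4 * a * k
        if disc < 0 or (-b + math.sqrt(disc)) < 0:
            return None                  # track never (re-)enters coverage
        return max((-b - math.sqrt(disc)) / (2 * a), 0.0)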

Eva Gustafsson and Annika Jonsson. Always Best Connected. 2003. IEEE Personal Communications, 10(1):49--55.
Introduces the concept of always best connected, as distinct from always connected, and proceeds to give an example of how this might be achieved, and what the AAA requirements are.

Masugi Inoue and Khaled Mahmud and Homare Murakami and Mikio Hasegawa. Novel Out-of-Band Signalling for Seamless Interworking Between Heterogeneous Networks. 2004. IEEE Wireless Communications, 11(2):56--63.
A server-side policy-based handover system, allowing users to specify particular aspects of how handovers should be conducted. The system proposed mentions the need for comprehensive coverage maps to determine where handovers should be conducted. The authors consider that this is unlikely, and hence as future work propose that methods for the client detecting and connecting to networks be developed.

N. Van den Wijngaert and C. Blondia. A Predictive Low Latency Handover Scheme for Mobile IP. 2005. in Proc. International Conference on Mobile Computing and Ubiquitous Networking.
Using an urban mobility model, the authors use OPNET to simulate how tunnels to potential new foreign agents (FAs) can be set up before handoff from the current FA. This then increases the performance at handover, as the latency overhead of tunnel set up is decreased. The coverage zones of FAs are predicted using two well known radio propagation models.

Leo Patanapongpibul and Glenford Mapp and Andy Hopper. An End-System Approach to Mobility Management for 4G Networks and its Application to Thin-Client Computing. 2006. ACM SIGMOBILE Mobile Computing and Communication Review, 10(3):13--33.
Overviews handoff management in MIPv6 networks, and the experiments performed in the DTG. A client handoff mechanism with TCP enhancements was developed, where if a handover was imminent the last TCP ACK is buffered. On handover, the receiver advertised window is set to zero. After handover, the buffered ACK is sent multiple times to trigger a fast retransmit. This ensures that time is not lost waiting for a time out. The handoff mechanism was tested using continuous and discontinuous handover situations. A version of VNC was used to illustrate the handoff mechanism functioning on ultra-thin clients.

Eranga Perera and Vijay Sivaraman and Aruna Seneviratne. Survey on network mobility support. 2004. SIGMOBILE Mob. Comput. Commun. Rev., 8(2):7--19.
Overviews what NEMO is, and the issues it raises. The key to NEMO is to utilise a Mobile Router (MR) for a collection of nodes in a mobile network, rather than carry out MobileIP handover procedures for all of them. This allows for decreased complexity, particularly if there are fixed nodes within the mobile network (e.g. sensors in an aeroplane) that have too few resources in order to implement mobility protocols. With an MR, handover takes place for the entire network by means of the MR having its own Home Agent, which in turn has all packets from individual mobile node's home agents forwarded to it. Nested PANs within Vehicular Area Networks (VANs) make the problem more complex. Sub-optimal routing due to multiple HAs is an issue with NEMO, as is multihoming with multiple MRs for a single VAN. Related projects are overviewed.

Mike O'Dell. 8+8 - An Alternate Addressing Architecture for IPv6. 1996. draft-odell-8+8-00.txt, IETF.
Describes how IPv6 addresses could be split into two 8 byte portions. The first would be a Routing Goop (RG), and the second the End System Designator (ESD). The idea is to allow private networks to use arbitrarily complex topologies, but these are not exposed to the public Internet. Instead systems can route to the edge of the private leaf networks, and then these can be aware of where the individual end systems are.
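
A minimal sketch (Python; the address is illustrative) of the proposed split, where the high 64 bits (RG) route to the edge of the leaf network and the low 64 bits (ESD) name the end system:

    import ipaddress

    addr = ipaddress.IPv6Address("2001:db8:1234:5678:dead:beef:cafe:f00d")
    n = int(addr)
    routing_goop = n >> 64                       # high 8 bytes
    end_system_designator = n & ((1 << 64) - 1)  # low 8 bytes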

Jan Derksen and Robert Jansen and Markku Maijala and Erik Westerberg. HSDPA Performance and Evolution. 2006. Ericsson Review, vol 84.
Overviews the performance of HSDPA using a real deployment. The results show how the throughput varies depending on the RSSI, with the distribution of throughputs being wide when there is poor radio quality. A test where the mobile station handed off between multiple base stations was also performed.

Kostas Pentikousis and Marko Palola and Marko Jurvansuu and Pekka Perälä. Active Goodput Measurements from a Public 3G/UMTS Network. 2005. IEEE Communications Letters, 9(9):802--804.
Experimental test using a real operator's network of how WCDMA performance varied depending on the payload size used (a 4 KB payload implies 25 kb/s goodput, a 1024 KB payload implies 3100+ kb/s). The initial goodput value seen was also always lower than the spread of goodput values seen throughout the connection. On wired networks this initial value was within the spread. On WCDMA networks, back-to-back connections experience higher goodputs than the initial connection. This can be explained by the fact that the user terminal must acquire access to a dedicated channel (DCH) for TCP transmissions, or else at least activate a Packet Data Protocol (PDP) context. Provided that further connections are initiated before the channel/context times out, they will not suffer this setup time penalty.

Marko Jurvansuu and Jarmo Prokkola and Mikko Hanski and Pekka Perälä. HSDPA Performance in Live Networks. 2007. in Proc. IEEE ICC, pages 467--471.
Good investigation into the performance of HSDPA as compared to WCDMA on a real operator's network. Examines goodput vs packet size, delay, and jitter.

Pablo Vidales and Carlos J. Bernardos and Ignacio Soto and David Cottingham and Javier Baliosian and Jon Crowcroft. MIPv6 experimental evaluation using overlay networks. 2007. Computer Networks, 51(10):2892--2915.
The commercial deployment of Mobile IPv6 has been hastened by the concepts of Integrated Wireless Networks and Overlay Networks, which are present in the notion of the forthcoming generation of wireless communications. Individual wireless access networks show limitations that can be overcome through the integration of different technologies into a single unified platform (i.e., 4G systems). This paper summarises practical experiments performed to evaluate the impact of inter-networking (i.e. vertical handovers) on the network and transport layers. Based on our observations, we propose and evaluate a number of inter-technology handover optimisation techniques, e.g., Router Advertisements frequency values, Binding Update simulcasting, Router Advertisements caching, and Soft Handovers. The paper concludes with the description of a policy-based mobility support middleware (PROTON) that hides 4G networking complexities from mobile users, provides informed handover-related decisions, and enables the application of different vertical handover methods and optimisations according to context.

Pablo Rodriguez and Rajiv Chakravorty and Julian Chesterfield and Ian Pratt and Suman Banerjee. MAR: A Commuter Router Infrastructure for the Mobile Internet. 2004. in Proc. ACM MobiSys, pages 217--230.
Describes a system that has multiple radio interfaces (e.g. multiple GPRS interfaces, each with a different provider) and is able to combine the throughputs of each interface by striping traffic over them, in order to obtain a better throughput than any single interface (and fewer interruptions to the connection). MAR has been implemented and tested in Cambridge. It can use a MAR proxy server in the Internet to allow striping of single TCP streams over multiple links, or it can operate in standalone mode, with the router acting as a NAT box. MAR is able to detect when an interface should not be used on the basis of the RSSI on that interface. The implementation is evaluated using HTTP connections and streaming media, with good results.

Jon Crowcroft and David Cottingham and Glenford Mapp and Fatema Shaikh. Y-Comm: A Global Architecture for Heterogeneous Networking. 2007. in Proc. 3rd Annual International Wireless Internet Conference (WICON). (Invited paper.)

Glenford Mapp and David Cottingham and Fatema Shaikh and Edson Moreira and Renata Vanni and Wayne Butcher and Aisha El-safty and Jon Crowcroft. An Architectural Framework and Enabling Technologies for Heterogeneous Networking. 2008. Submitted to IEEE/ACM Transactions on Networking.

Jie Zhang and Henry C. B. Chan and Victor C. M. Leung. A Location-Based Vertical Handoff Decision Algorithm for Heterogeneous Mobile Networks. 2006. in Proc. IEEE GLOBECOM, pages 1--5.
Describes an algorithm for deciding whether to hand off to a broadband wireless network (from a cellular network), by means of examining whether the penalty of the handover is compensated for by the benefit of the higher throughput destination network. The assumption is made that the provider base stations can provide information concerning available (nearby) networks, and that the mobile node can report its position to the base station to obtain a suitable list. A utility function is used to evaluate whether handovers are ``good'' from the user's point of view. Base stations estimate the coverage area of local wireless networks by noting when mobile nodes choose to make vertical handovers. A Markov model is used to model user mobility. The algorithm is compared by simulation to a fuzzy handover scheme (where handover occurs if the WLAN RSS exceeds a threshold dependent on the speed of the mobile node and the traffic load on the WLAN), and a dwell time scheme (the MT hands off if its dwell time in the WLAN area is greater than the penalty incurred due to a handover). The proposed scheme has a lower harmful handoff probability than the other two approaches, and also performs better at increased velocities, on open terrain with random movement.

Helen J. Wang and Randy H. Katz and Jochen Giese. Policy-Enabled Handoffs Across Heterogeneous Wireless Networks. 1999. in Proc. IEEE WMCSA.
Outlines the need for policies to best utilise the hierarchy of wireless networks available to a mobile device. Notes that charging policies for different networks are important. Network conditions, power consumption, connection duration and connection setup time are also taken into account. Network coverage maps are also cited as being important. These factors are combined by taking a weighted sum of their logarithms. Also mentions the need for a ``consistency period'' (stability period), where the period that a user derives benefit on the new network must be enough to compensate for the cost of handing off to it. The authors consider the possibility that multiple mobile hosts would have the same policy and hence carry out handovers at the same time (handover synchronisation), and hence cause network conditions to fluctuate significantly. The authors solve this by means of randomising the stability time on a per mobile node basis. A policy manager and GUI are presented, that take into account known network conditions in order to make handover decisions.
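
A minimal sketch (Python; the parameters and weights are illustrative, not the paper's) of the combination rule: a weighted sum of logarithms of each network's normalised factors, with the lowest-cost network chosen.

    import math

    def cost(params, weights):
        # params: per-network factors (e.g. price, power, 1/bandwidth),
        # all normalised so that lower is better.
        return sum(w * math.log(p) for p, w in zip(params, weights))

    networks = {"wlan": (1.0, 2.0, 0.1), "gprs": (3.0, 1.0, 2.0)}
    weights = (1.0, 0.5, 2.0)
    best = min(networks, key=lambda n: cost(networks[n], weights))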

Mary G. Baker and Xinhua Zhao and Stuart Cheshire and Jonathan Stone. Supporting Mobility in MosquitoNet. 1996. in Proc. USENIX, pages 120--127.
Describes the MosquitoNet approach to IP mobility, where foreign agents are entirely removed from the equation, with packets being tunneled directly to the mobile host's new care of address. The approach is evaluated using a real testbed. The pros and cons of such an approach compared to others are given. The key aim of the project is to support IP mobility whilst only requiring the home agent and the mobile host to be modified. Signalling upper layers with connectivity information is described as future work.

Christos Douligeris and Thanos Vasilakos. Mobile IP Protocols. 2002. in Handbook of Wireless Networks and Mobile Computing, pages 529--552. John Wiley.

Roberto Verdone and Alberto Zanella. Performance of Received Power and Traffic-Driven Handover Algorithms in Urban Cellular Networks. 2002. IEEE Wireless Communications, 9(1):60--71.
Provides an overview of the different parameters used for handover algorithms, then focuses on those that use received signal strength. The authors propose using the gradient in RSS to decide whether to initiate a handover to another base station (i.e. a negative gradient will be taken to mean a handover is necessary, even if the current base station's RSS is adequate). The authors then evaluate their algorithm by simulation in a Manhattan grid scenario.
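
A small sketch of such a gradient-based trigger, using the least-squares slope of a window of recent RSS samples (the window size and threshold are illustrative assumptions, not values from the paper):

    # Hedged sketch of a gradient-based handover trigger: initiate a
    # handover when the least-squares slope of recent RSS samples is
    # negative, even if the absolute RSS is still adequate. The window
    # and threshold are illustrative assumptions.

    def rss_gradient(samples):
        """Least-squares slope of (index, rss) pairs over the window."""
        n = len(samples)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(samples) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
        den = sum((x - mean_x) ** 2 for x in xs)
        return num / den

    def should_trigger(samples, slope_threshold=-0.5):
        return rss_gradient(samples) < slope_threshold

    print(should_trigger([-60, -62, -63, -66, -68]))  # True: RSS falling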

Nishith D. Tripathi and Jeffrey H. Reed and Hugh F. Van Landingham. Handoff in Cellular Systems. 1998. IEEE Personal Communications, 5(6):26--37.
Broad overview of handoff in cellular systems, including the deployment scenarios of cellular systems that necessitate handovers (including macrocell overlay systems), how handoffs are prioritised (noting that sometimes handoffs are queued waiting for capacity to become available at the target base station), how handovers relate to other resource management tasks, what types of handoff protocols there are (network controlled, mobile assisted, mobile controlled), and how handoff mechanisms are evaluated. Does not survey handover protocols in detail.

G. N. Senarath and D. Everitt. Comparison of alternative handoff strategies for micro-cellular mobile communication systems. 1994. in Proc. IEEE VTC, pages 1465--1469.
Evaluates the performance of several different algorithms that use a variable value of hysteresis. The value is set based on the RSS level divided by the threshold below which a call is dropped.

R. Vijayan and J. M. Holtzman. Sensitivity of handoff algorithms to variations in the propagation environment. 1993. in Proc. International Conference on Universal Personal Communications, pages 158--162.
The authors conclude that sensitivity to fading standard deviation is most significant in the performance of handover algorithms.

Edson D. S. Moreira and David N. Cottingham and Jon Crowcroft and Pan Hui and Glenford E. Mapp and Renata M. P. Vanni. Exploiting contextual handover information for versatile services in NGN environments. 2007. in Proc. 2nd IEEE International Conference on Digital Information Management (ICDIM), pages 506--512.
Users in ubiquitous and pervasive computing environments will be much more empowered in the ways they access networks and control their navigation. Handover, the vital event in which a user changes the attachment point in a next generation network (NGN), is an important occasion, and the conditions and environment in which it is executed can offer relevant information for businesses. This paper describes the capabilities of a platform which aims to exploit contextual handover information, offering a rich environment that can be used by access and content providers for building innovative context-aware multi-provider services. Based on ontologies, the technique not only eases the building of versatile services but also provides a comprehensive source of information, both for enriching user navigation in the network and for improving the provider's relationship with their customers.

Glenford Mapp and David N. Cottingham and Fatema Shaikh and Pablo Vidales and Leo Patanapongpibul and Javier Balioisian and Jon Crowcroft. An Architectural Framework for Heterogeneous Networking. 2006. in Proc. International Conference on Wireless Information Networks and Systems (WINSYS).
The growth over the last decade in the use of wireless networking devices has been explosive. Soon many devices will have multiple network interfaces, each with very different characteristics. We believe that a framework that encapsulates the key challenges of heterogeneous networking is required. Just as a map helps one to plan a journey, a framework is needed to help us move forward in this unexplored area. The approach taken here is similar to the OSI model, in which tightly defined layers are used to specify functionality, allowing a modular approach to the extension of systems and the interchange of their components, whilst providing a model that is more oriented to heterogeneity and mobility.

Antonio de la Oliva and Telemaco Melia and Albert Vidal and Carlos J. Bernardos and Ignacio Soto and Albert Banchs. IEEE 802.21 enabled mobile terminals for optimized WLAN/3G handovers: a case study. 2007. SIGMOBILE Mobile Computer Communications Review, 11(2):29--40.
Describes simulations that evaluate the use of IEEE 802.21 in optimising handovers. The draft standard aims to improve handovers by providing a database of neighbouring networks, with the client reporting the signal strengths of those neighbours to the base station, in order that a handover decision can be made.


Network Coverage Mapping

Kansas Applied Remote Sensing. Wireless Network Visualisation Project. 2002. Kansas University.
Mapping the WLAN signal strength in an urban area by collecting values on an ad hoc basis using NetStumbler, then superimposing these onto a satellite photo. Interpolation is then used to approximate the signal strength in areas where samples were not taken. In my opinion this is likely to be unreliable, given that signal strength may differ enormously in areas that were not accessible by vehicle, e.g. around buildings. No papers/reports available.

Theodoros Kamakaris and Jeffrey V. Nickerson. Connectivity Maps: Measurements and Applications. 2005. in Proc. 38th Hawaii International Conference on System Sciences, pages 307--311.
Describes the usage of Netstumbler and a GPS receiver to record WLAN signal strength readings around a campus. These have then been plotted in the form of contour maps. Throughput was also measured and plotted in the same way. No indication of how throughput was measured is given (e.g. TCP or UDP, stationary or mobile). They also briefly outline how coverage maps could be useful for predicting bandwidth on routes, or optimising the route to be taken based upon this. They do not consider how to actually do this in practice.

Jeffrey V. Nickerson. A Concept of Communication Distance And Its Application to Six Situations in Mobile Environments. 2005. IEEE Transactions on Mobile Computing, 4(5):409--419.
Describes the concept of communication distance: the sum of the time taken for a message to travel from the transmission point to its destination, plus the time taken for the user themselves to move to the transmission point. Such a calculation requires knowledge of how much data is to be transferred. The author uses this concept to create a weighted graph of the road network whose weighting function incorporates the amount of data that may be transferred. The function takes the traversal time of an edge and subtracts from it the time it would take for the amount of data transferable over the edge to be transferred over the (higher) throughput of a hotspot. Such an approach minimises the travel time, if it is assumed that the entire (known size) file is to be downloaded, and that there is a hotspot of the appropriate throughput at the end of the journey. The approach (evidently) relies on an accurate notion of what coverage exists on each road. The author then considers the case of dynamically changing edge weights, and points to using more complex space-time graphs that also incorporate the changing state over time (as well as space).
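
A minimal sketch of this edge weighting (the function name, units, and example values are illustrative assumptions, not the paper's formulation):

    # Hedged sketch of Nickerson's edge weighting: the effective weight
    # of a road edge is its traversal time minus the time saved by
    # transferring data over the edge's coverage rather than waiting to
    # use a hotspot. Names and the exact algebra are illustrative.

    def edge_weight(traversal_time_s, edge_throughput_bps,
                    hotspot_throughput_bps):
        """Data transferable while traversing the edge, credited at the
        (higher) hotspot rate: the edge 'earns back' some travel time."""
        data_bits = edge_throughput_bps * traversal_time_s
        time_saved_s = data_bits / hotspot_throughput_bps
        return traversal_time_s - time_saved_s

    # A 60 s edge with 1 Mbit/s coverage, against an 11 Mbit/s hotspot:
    print(edge_weight(60.0, 1e6, 11e6))  # ~54.5 s effective weight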

Bhaskar Krishnamachari and Stephen B. Wicker. Fixed Network Design in Cellular Systems using Local Search Algorithms. 2000. in Proc. IEEE VTC, pages 1632--1638.
Evaluates the use of Genetic Algorithms, Tabu search, Simulated Annealing, Greedy First search, and Random Walk in deciding where to place base station controllers in cellular 'phone networks. Tabu search consists of selecting a number of neighbouring points in the search space as possible moves. There is also a list of tabu moves (that may be made once a particular set of aspiration conditions have been satisfied) that is dynamically updated. Genetic algorithms take each sample as a member of a population, and in each generation produce new points from those samples. The conclusion is that Tabu search and genetic algorithms are the best techniques for BTS placement.

Eric M. Delmelle and Peter A. Rogerson and Mohan R. Akella and Rajan Batta and Alan Blatt and Glenn Wilson. A spatial model of received signal strength indicator values for automated collision notification technology. 2006. Transportation Research Part C, 13:432--447.
Describes how a large number of samples of cellular RSSI were interpolated using kriging. This is a technique that involves constructing a variogram, which expresses the spatial interdependence of two points depending on their separation. The authors then examine their model compared to the actual locations of base stations. They then use topographical data to estimate how the altitude and slope of the terrain affect the RSSI, and again compare their model to reality, by examining the residuals (the error between reality and the estimates), and ascertaining whether these are spatially correlated using Moran's I. The greater the spatial correlation, the less accurate the interpolation, as errors should be randomly distributed. The effect of vegetation is also briefly considered.
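
As a flavour of the first step of kriging, the following sketch computes an empirical semivariogram from scattered RSSI samples (the bin width and toy data are illustrative assumptions; fitting a variogram model and solving the kriging system are omitted):

    # Hedged sketch of building an empirical semivariogram, the first
    # step of kriging as used by Delmelle et al. Bin widths and the toy
    # data are illustrative assumptions.

    from itertools import combinations
    import math

    def empirical_semivariogram(points, values, bin_width=10.0, n_bins=5):
        """gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs separated ~h."""
        sums = [0.0] * n_bins
        counts = [0] * n_bins
        for i, j in combinations(range(len(points)), 2):
            (x1, y1), (x2, y2) = points[i], points[j]
            h = math.hypot(x2 - x1, y2 - y1)
            b = int(h // bin_width)
            if b < n_bins:
                sums[b] += 0.5 * (values[i] - values[j]) ** 2
                counts[b] += 1
        return [s / c if c else None for s, c in zip(sums, counts)]

    pts = [(0, 0), (5, 0), (12, 0), (20, 0), (33, 0)]
    rssi = [-60.0, -61.0, -64.0, -70.0, -75.0]
    print(empirical_semivariogram(pts, rssi))  # variance grows with lag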

Simon Byers and Dave Kormann. 802.11b access point mapping. 2003. Communications of the ACM, 46(5):41--46.
Overviews the methods and reasons for WiFi AP mapping. The authors point out the many potential applications, such as analysing the percentage of the population having access to WiFi, or establishing the locations of APs from their SSIDs in conjunction with the telephone directory.

L"uck, Stephan and Scharf, Michael and Jorge, Gil. An Architecture for Acquisition and Provision of Hotspot Coverage Information. 2005. in Proc. 11th European Wireless Conference 2005, pages 287-293. VDE Verlag.
Signal strength data for hotspots is collected using GPS-enabled PDAs. It is then pre-processed to only include those values where there is sufficient connectivity to be useful, and uploaded to a central server. The server aligns the values with a grid of points, interpolates to fill the grid, and blurs and vectorizes the data (essentially forming a contour describing the region of useful coverage). The polygons are simplified using Euclidean distance and slope-based simplification algorithms. They are then added to the Nexus database. Mobile terminals use the Augmented World Query Language to retrieve information about nearby coverage regions.

Siebert, M. and Schinnenburg, M. and Lott, M. and Göbbels, S. Location Aided Handover Support for Next Generation System Integration. 2004. in Proc. 5th European Wireless Conference, pages 195--202.
Describes how a database of signal strength measurements may be used to ascertain whether a handover should be carried out at a mobile terminal's current location. The location of the MT is fuzzy, and therefore the query mechanism uses signal pattern matching techniques to ascertain where the MT is. No details are provided about the algorithms behind the handover determination or pattern matching.

Siebert, M. and Lott, M. and Schinnenburg, M. and Göbbels, S. Hybrid Information System. 2004. in Proc. IEEE VTC, pages 5.
Broadly similar to the above paper.

Henry Larkin. Wireless Signal Strength Topology Maps in Mobile Adhoc Networks. 2004. in Proc. Embedded and Ubiquitous Computing Conference, pages 538--547.
Update to an earlier paper by the same author on using Communication Maps. These are expandable regions of non-overlapping cells. Cells can be sub-divided in order to specify areas with greater levels of detail. Updates to cells are made as new values are reported. In addition, when signals are received, they are assumed to contain the co-ordinates of the originating node. This is then used to form the assumed propagation path (a line) that traverses various cells. The cells' signal loss modifiers are then updated in proportion to the length of the path spent in each cell, such that the resulting loss will produce an RSSI that is equal to the actual value experienced. The author provides a simulation of how a Communication Map is built up.

Stephan L"uck and Christian M. Mueller and Michael Scharf and Robert Fetscher. Algorithms for Hotspot Coverage Estimation Based on Field Strength Measurements. 2007. in Proc. IEEE VTC, pages 1086--1090.
Detailed description of the production of contour maps of wireless coverage (mostly single boundaries rather than hierarchies of contours). Inverse Distance Weighting is used to interpolate between randomly located sample points, to then produce a raster of points in a grid. These are vectorized and the resulting polygons are simplified using the Euclidean distance algorithm. The simplification scheme is evaluated by simulation, with radio signal strengths produced according to the Walfish-Ikegami model, and graphs are presented of how the polygon accuracy varies with different values of the simplification parameters.
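
A minimal sketch of the Inverse Distance Weighting step for a single grid point (the power parameter and sample data are illustrative assumptions):

    # Hedged sketch of Inverse Distance Weighting onto a grid point, as
    # used by Lueck et al. before vectorization. The power parameter and
    # data are illustrative assumptions.

    import math

    def idw(samples, query, power=2.0):
        """samples: list of ((x, y), value). IDW estimate at query."""
        num = den = 0.0
        for (x, y), value in samples:
            d = math.hypot(query[0] - x, query[1] - y)
            if d == 0.0:
                return value  # exact hit on a sample point
            w = 1.0 / d ** power
            num += w * value
            den += w
        return num / den

    samples = [((0, 0), -55.0), ((10, 0), -70.0), ((0, 10), -65.0)]
    print(round(idw(samples, (3, 3)), 1))  # estimate at unsampled point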

Vishnu Navda and Anand Prabhu Subramanian and Kannan Dhanasekaran and Andreas Timm-Giel and Samir R. Das. MobiSteer: Using Steerable Beam Directional Antenna for Vehicular Network Access. 2007. in Proc. ACM MobiSys.
Optimising WiFi performance using an antenna with 16 possible beam positions. There are two modes: cached and online. In cached mode, an RSSI database is used to provide knowledge of which APs have been encountered along the route previously, and which beam was best used. Optimal beam selection consists of selecting the best beam in each short segment. In online mode, the best AP and beam currently being heard is selected, and used until a certain number of packet drops are experienced. The two modes are compared to the usage of an omnidirectional antenna in two scenarios: a car park and a dense residential area, using carefully deployed APs, and also the APs that were found by war driving. The database receives little processing, in the sense that a representative trajectory's readings are picked as those used for prediction of which beam and AP should be used on a subsequent journey. Variation in WiFi performance with location was not great, as measured over 8 different journeys on different days at different times.

Minho Shin and Arunesh Mishra and William A. Arbaugh. Improving the Latency of 802.11 Hand-offs using Neighbor Graphs. 2004. in Proc. ACM MobiSys, pages 70--83.
The authors claim that 90 p.c. of the time to hand off to a new 802.11 network is spent in the probe phase used to detect that network. They therefore aim to reduce this time by giving mobile stations extra information on what APs might be available. This is done by keeping records of which APs have overlapping coverage with which others, and between which APs clients successfully carry out handovers. The former graph can be ascertained by the APs themselves (with some modification?). The authors construct their neighbour graph by means of a mobile station travelling around two floors of a building. Clients use the neighbour and overlap graphs in two ways: firstly, when connected to an AP, they are aware of what channels neighbours of that AP are present on, and where there are no neighbours. Hence, channels that have no neighbours occupying them do not need to be scanned. Also, the client need only wait for the responses to arrive from those APs it knows are on the channel, rather than the full timeout period for a channel scan. Secondly, the coverage overlap graph can be used to eliminate some of the candidates left over from using the neighbour graph. This works by knowing that if a beacon is received from a particular AP, we will never hear a beacon from any candidate AP whose coverage does not overlap with the first AP's. This reduces scan times further. The authors go on to simulate the effects of a larger number of non-overlapping channels and neighbour APs, and find that their algorithms provide further decreases in scan time in these cases. The authors do not propose a solution to the problem of AP removal, where the neighbour graph is incorrect until sufficient clients waste time scanning for the removed AP before the algorithm decides that it is absent, though they argue that clients will report the AP's absence, the graph will be updated, and the new graph distributed to the clients (not really described).
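
A small sketch of the neighbour-graph pruning idea: given the current AP, only the channels occupied by its neighbours need to be probed (the graph, channel map, and AP names are illustrative, not from the paper):

    # Hedged sketch of neighbour-graph-based scan pruning, in the spirit
    # of Shin et al.: only scan channels on which a neighbour of the
    # current AP is known to operate. Graph and channel map are toy data.

    neighbour_graph = {
        "ap1": {"ap2", "ap3"},
        "ap2": {"ap1"},
        "ap3": {"ap1"},
    }
    ap_channel = {"ap1": 1, "ap2": 6, "ap3": 11}
    ALL_CHANNELS = list(range(1, 12))

    def channels_to_scan(current_ap):
        """Channels occupied by neighbours of the current AP; all other
        channels can be skipped, shortening the probe phase."""
        neighbours = neighbour_graph.get(current_ap, set())
        return sorted({ap_channel[n] for n in neighbours})

    print(channels_to_scan("ap1"), "instead of all",
          len(ALL_CHANNELS), "channels")  # [6, 11] instead of all 11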

David N. Cottingham and Robert K. Harle. Constructing Accurate, Space-Efficient, Wireless Coverage Maps for Vehicular Contexts. 2008. in Proc. ICST WICON.

David N. Cottingham and Robert K. Harle. Handover-Optimised Routing Over Multi-planar Graphs for Vehicles. 2009. In progress.

Anastasios Giannoulis and Marco Fiore and Edward W. Knightly. Supporting Vehicular Mobility in Urban Multi-hop Wireless Networks. 2008. in Proc. ACM MobiSys.
Experiments examining the handover delays and mean throughputs achievable by a vehicle traversing the coverage of a 3 km^2 WiFi mesh network. The authors first evaluate baseline handover policies, namely ``maintain until broken'', ``always strongest signal'', and ``averaged with hysteresis''. The latter uses time-averaging of the AP RSS and a degree of hysteresis in AP switching. Always strongest signal performs better than maintain until broken, but suffers from a large number of handovers. Hysteresis increases mean throughput by 40 p.c. Because different APs provide different levels of service, long-term quality scores, expressing the throughput of the backhaul from the AP over the mesh network, are incorporated into the metric (in addition to the RSS-based AP quality score). This increased mean throughput by a further 25 p.c. vs. the hysteresis algorithm. Causes of association delay are considered, with the conclusion that outdoor mesh networks have far higher numbers of failed associations than indoor (dense) network deployments.
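
The ``averaged with hysteresis'' policy might be sketched as follows (the smoothing factor and margin are illustrative assumptions, not the paper's parameters):

    # Hedged sketch of the "averaged with hysteresis" policy: switch APs
    # only when a candidate's time-averaged RSS exceeds the current AP's
    # average by a hysteresis margin. Alpha and margin are illustrative.

    class HysteresisSelector:
        def __init__(self, alpha=0.3, margin_db=5.0):
            self.alpha = alpha          # EWMA smoothing factor
            self.margin_db = margin_db  # hysteresis margin
            self.avg = {}               # AP -> smoothed RSS
            self.current = None

        def update(self, readings):
            """readings: dict of AP -> instantaneous RSS (dBm)."""
            for ap, rss in readings.items():
                old = self.avg.get(ap, rss)
                self.avg[ap] = (1 - self.alpha) * old + self.alpha * rss
            best = max(self.avg, key=self.avg.get)
            if self.current is None:
                self.current = best
            elif self.avg[best] > self.avg[self.current] + self.margin_db:
                self.current = best  # switch only past the margin
            return self.current

    sel = HysteresisSelector()
    print(sel.update({"apA": -60, "apB": -62}))  # apA
    print(sel.update({"apA": -64, "apB": -61}))  # still apA (in margin)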


Radio Engineering

Intel. Understanding Wi-Fi and WiMAX as Metro-Access Solutions. 2004. Intel Corp.
Compares IEEE 802.16 WiMAX to 802.11b/g. Wi-Fi is primarily designed for blanket wireless coverage, i.e. LAN environments, and uses DSSS as its modulation scheme. 802.11g is being used to patch last mile deployments, but is not scalable due to its usage of a CSMA/CA protocol. In contrast, WiMAX makes use of OFDM to make it robust against outdoor multipath effects, has variable width channels to avoid bandwidth being wasted with low demand subscribers, and places all channel access scheduling on the base station. It is therefore highly suited to last mile deployments, connecting 802.11b/g hotspots together, whilst 802.11a is used to interconnect APs within hotspots. In addition, WiMAX can be used for backhaul connections to reach remote wired networks.

R. Marks and C. Eklund and K. Stanwood and S. Wang. The 802.16 WirelessMAN MAC: It's Done, but What Is It? 2001.
Describes in detail the 802.16 MAC layer. Access to the channel is orchestrated by the base station, and subscribers must listen for channel ``maps''; QoS guarantees can therefore be provided. Transmission is in bursts, and therefore timing is highly important. Modulation and FEC for each connection are dynamically varied depending on the conditions to a particular subscriber. The standard is flexible in terms of bandwidth and encoding (and therefore bit rate).

J. Sydor. True Broadband for the Countryside. 2004. IEE Communications Engineer, pages 32--36.
Describes testing of fixed wireless broadband services in Canada. Base stations use 16 antennae each, with each antenna having a very narrow coverage zone (a petal). This enables only 6 frequencies to be shared between the 16 antennae. The resulting coverage rosettes can be close packed, and rotated to avoid co-channel interference between base stations. By measuring the performance obtained dynamically the system is able to reconfigure what frequencies different antennae use, and their power, to account for varying interference.

J. Sydor. A Proposed High Data Rate 5.2/5.8 GHz Point to Multipoint MAN System. 2000. IEEE.
In-depth description of the Canadian MILTON system. Covers path loss due to wind-induced tree motion (up to -17 dB), the reduction in polarisation isolation due to the urban canopy (i.e. polarisation becomes distorted, and therefore co-channel interference is encountered; the author notes that this effect does not increase with distance, and is therefore likely to be due to isolated strong reflectors), and beam spreading (i.e. on transmission beams were highly directional, whereas on reception they had begun to spread out towards each other). The paper also notes that at long distances from a particular base station, allowing a user to choose between base stations may give better results than restricting them to the single one that 'should' be best. Finally, power control on the return link is also mentioned as an important factor in limiting co-channel interference, instead of using a constant transmission power from the subscriber units.

F. Frederiksen and P. Mogensen and J. Berg. Prediction of Path Loss in Environments with High-Raised Buildings. 2000. in Proc. IEEE VTC, pages 898--903.
Examines the performance of several simple path loss algorithms in urban environments. Real data is analysed, and compared with the predictions made by the Ikegami, COST, and Lee models, which incorporate building heights to calculate the diffraction and reflection of the signal. Does not give results for receiver antenna heights of less than 5 m, but notes that at low heights much of the signal power is from reflections (as would be expected).

J.-E. Berg. A Recursive Method For Street Microcell Path Loss Calculations. 1995. in Proc. International Symp. on Personal, Indoor and Mobile Radio Communications, pages 140--143.
Describes a simple algorithm, computationally efficient and suitable for ray tracing, that calculates path loss for street-based propagation environments. Considers the angle of street orientation and dual slope behaviour (i.e. a ray changes its slope as it is reflected off a building).

F. Harrysson and J.-E. Berg. Propagation prediction at 2.5 GHz close to a roof mounted antenna in an urban environment. 2001. in Proc. IEEE VTC, pages 1261--1263. IEEE.
Uses a simple geometrical optics and uniform theory of diffraction model to predict path loss for a 2.5 GHz system. The results are promising, especially since the model does not take into account multiple reflections or diffraction.

M. Sellars and G. Athanasiadou and B. Ziolko and S. Greaves and A. Hopper. Simulation of Broadband FWA Networks in High-rise Cities with Linear Antenna Polarisation. 2003. in Proc. IEEE PIMRC, pages 371--375.
Describes a ray tracing model for predicting path loss in cities when using 5 GHz fixed wireless broadband. The model is able to take into account up to 2 reflections, as well as path loss due to buildings and foliage. The results suggest that antenna placement on rooftops must be carefully carried out, as certain locations on a single rooftop have far better co-channel interference characteristics than others, thus affecting upstream performance considerably.

D. Crosby and V. Abhayawardhana and I. Wassell and M. Brown and M. Sellars. Time Variability of the Foliated Fixed Wireless Access Channel at 3.5 GHz. 2005. in Proc. IEEE VTC.

J. Otero and P. Yalamanchili and H.-W. Braun. High Performance Wireless Networking and Weather. 2001. University of California, San Diego.
Describes the effects of weather on long-distance links in the 5 and 2.4 GHz bands. The authors note that precipitation affects links significantly, but other factors do not. Rain caused a 4 dB loss over a 17.8 mile link (at 2.4 GHz?). They do posit that wind might jar antennas out of alignment, and hence affect communications, but do not present results that show this.

S. Vasudevan and K. Papagiannaki and C. Diot and J. Kurose and D. Towsley. Facilitating Access Point Selection in IEEE 802.11 Wireless Networks. 2005. in Proc. Internet Measurement Conference.
Describes a method of estimating potential MAC layer bandwidth from the delay experienced by AP beacons. An AP is defined by the IEEE 802.11 standard to transmit a beacon at time zero, and subsequently at a defined interval thereafter (typically 102.4 ms). Network load can be inferred from the elapsed time between the scheduled beacon time and the actual beacon packet transmission (as the transmission time is included in the beacon headers). By including the delays due to RTS/CTS and ACKs in the calculations, the overall data rate that would be achieved on association with the AP can be calculated passively. This work currently relies on beacon frames not being prioritised over others, and is also only applicable to Distributed Co-ordination Function (DCF) based APs, not PCF ones. Results in an anechoic chamber look promising.
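
A rough sketch of the underlying idea: compare each beacon's actual transmission time against its scheduled slot, and treat the average delay as a crude load indicator (the utilisation mapping at the end is my simplification, not the paper's full model):

    # Hedged sketch of inferring AP load from beacon delay, in the
    # spirit of Vasudevan et al.: beacons are scheduled every beacon
    # interval, so the gap between scheduled and actual transmission
    # time reflects how busy the channel is.

    BEACON_INTERVAL_S = 0.1024  # typical 802.11 target beacon interval

    def beacon_delays(tx_timestamps):
        """Delay of each beacon past its scheduled slot, using the
        timestamps carried in the beacon headers."""
        t0 = tx_timestamps[0]
        delays = []
        for k, t in enumerate(tx_timestamps):
            scheduled = t0 + k * BEACON_INTERVAL_S
            delays.append(max(0.0, t - scheduled))
        return delays

    def estimated_utilisation(tx_timestamps):
        """Crude load estimate: mean delay as a fraction of interval."""
        d = beacon_delays(tx_timestamps)
        return min(1.0, (sum(d) / len(d)) / BEACON_INTERVAL_S)

    stamps = [0.0, 0.1100, 0.2150, 0.3200]  # a moderately busy AP
    print(round(estimated_utilisation(stamps), 2))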

J. Zhang and L. Cheng and I. Marsic. Models for non-intrusive estimation of wireless link bandwidth. 2003. Proc. Personal Wireless Communications Conference, 2775:334--348.
Presents a passive method of using the SNR to estimate the bandwidth available on a wireless link. Shows a non-linear relationship between SNR and bandwidth, and presents a back-propagation neural network method and a Bayesian method of performing the estimation. The two are similar in their results, with a mean error of approx. 25 p.c. They briefly compare the performance of a model trained indoors and then used in an outdoor environment, and vice versa, and note that performance is degraded. TCP bandwidth as related to SNR is not evaluated.

G. Durgin and T. S. Rappaport and H. Xu. Measurements and Models for Radio Path Loss and Penetration Loss In and Around Homes and Trees at 5.85 GHz. 1998. IEEE Transactions on Communications, 46(11):1484--1496.
Describes a series of experiments measuring the path loss experienced by a 5.85 GHz signal in suburban environments. Antenna heights of 1.5 m and 5.5 m are used, at distances of 30 m and 210 m from the houses. Three buildings are sampled, with measurements taken at points surrounding them, and in all rooms. Path losses due to brick and wood appear to be equal. Foil insulation accounts for an extra 3 dB of loss. The 5.5 m receiver antenna height incurred less path loss than the 1.5 m one, most probably due to far less roof diffraction. Indoor rooms had better propagation characteristics if they had large glass windows (without metal covers). Coniferous and deciduous trees had equal losses of 11 to 16 dB, but the authors note that the 1.5 m antenna height worked better for deciduous trees, as these tend to have few branches at a low height.

M. Panjwani and A. Abbott and T. Rappaport. Interactive Computation of Coverage Regions for Wireless Communication in Multifloored Indoor Environments. 1996. IEEE Transactions on Communications, vol 13.
Describes a C program used as an AutoCAD plugin, that uses simple path loss algorithms to estimate approximately 36 points around a wireless transceiver that form the boundary of its coverage area. The algorithms take into account the losses due to free space, partitions, and floors. The model also takes into account sources of interference, assuming the same path loss model.

D. Duchamp and N. Reynolds. Measured performance of a wireless LAN. 1992. in Proc. 17th IEEE Conference on Local Computer Networks, pages 494--499.
Describes tests conducted with old 2 Mbit/s WaveLAN cards to ascertain the error rate and range of the technology. Signal strength reported by the kernel was in the format of a ratio of the power of the peak signal as compared to that of the primary side lobe. The performance is greatly affected by reflections from nearby objects (e.g. walls) when the received signal strength is low, and varies significantly on a metre by metre position basis. Errors tended to be in isolated bits rather than runs. Small packets are more likely to be received error free than large ones, boding well for signalling protocols.

J. Lansford and R. Nevo and B. Monello. Wi-Fi and Bluetooth: Enabling Coexistence. 2001. Compliance Engineering, 18(4):30--45.
Details several experiments on the effect of using Bluetooth and WiFi transceivers in close proximity. The authors found that the bandwidth of a WiFi connection is noticeably impaired, due to Bluetooth's frequency hopping transmission technique, which is not predictable from the perspective of the WiFi stations. WiFi performance therefore degrades at a greater rate with distance between the end stations with Bluetooth enabled than without. Meanwhile, WiFi transmissions can cause seemingly random packet losses in a Bluetooth connection, though the effect is not as deleterious as Bluetooth's effect on WiFi (most probably because Bluetooth uses a pseudo-random selection of which frequency to hop to).

Chris Snow and Serguei Primak. Performance Evaluation of TCP/IP in Bluetooth Based Systems. 2002. in Proc. IEEE VTC, pages 429--433.
Proposes a model of two Bluetooth channels that are utilised simultaneously. Because of their spatial and temporal correlation, performance is affected. Despite Bluetooth having FEC and ARQ integrated into the data-link layer, the authors note from their simulations that under poor conditions the retransmissions at the data-link layer will be many, causing TCP to timeout, and thus decreasing its performance. No graphs of actual TCP performance are given.

S. Choi and J. Prado and S. Shankar N and S. Mangold. IEEE 802.11e Contention-based Channel Access (EDCF) Performance Evaluation. 2003. in Proc. IEEE ICC.
Compares the IEEE 802.11 MAC DCF to the enhanced DCF in 802.11e. The EDCF allows each end station to map each frame to a QoS class (an Access Category). The AP broadcasts the QoS it currently offers for each class, in terms of the minimum and maximum contention window, and the AIFSD. This can be dynamically varied by the AP. The smaller the AIFS, the less time is required between frames, giving higher throughput. However, the lower the minimum contention window, the higher the probability of collision. The performance of DCF and EDCF is compared, and the latter found to give better QoS.

I. Stojmenović. Handbook of Wireless Networks and Mobile Computing. 2002. Wiley.

James C. Chen. Measured Performance of 5-GHz 802.11a Wireless LAN Systems. 2001. Atheros Communications, Inc.
Describes a test comparing 802.11a to 802.11b. The former has a greater bandwidth at similar ranges: up to 54 Mbps. It also has 8 non-overlapping channels, thus allowing larger deployments without as great a degree of co-channel interference.

Kamenetsky, M. and Unbehaun, M. Coverage planning for outdoor wireless LAN systems. 2002. in Broadband Communications. International Zurich Seminar on Access, Transmission, Networking, pages 49-1--49-6.
Describes the usage of neighbourhood search (where the algorithm is given an initial solution and modifies it to a nearby one, i.e. a neighbour, adopting the new solution if it is more optimal) and simulated annealing (where an initial solution is randomly altered and evaluated; if the new solution is better than the old it is adopted as input to the next iteration, and if not, it is still adopted with a probability related to a factor gamma, allowing the algorithm to break out of a ``bad patch''; gamma is reduced as the simulation progresses, a process known as cooling, so that the search eventually settles in a particular area) to optimally place WLAN APs in a campus environment. The authors present a new pruning algorithm that greedily optimises the objective function (maximise coverage, minimise areas not covered) with far fewer iterative steps than simulated annealing. All three algorithms are compared, and it is suggested that pruning gives a near-optimal result, suitable as input to SA or NS.
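
A compact sketch of simulated annealing applied to AP placement over a coverage grid (the grid size, coverage radius, and cooling schedule are illustrative assumptions, not the paper's parameters):

    # Hedged sketch of simulated annealing for AP placement: maximise
    # grid coverage within a fixed radius. All parameters are toy values.

    import math
    import random

    GRID = [(x, y) for x in range(20) for y in range(20)]
    RADIUS = 5.0
    N_APS = 3

    def coverage(aps):
        """Fraction of grid points within RADIUS of at least one AP."""
        covered = sum(1 for p in GRID
                      if any(math.dist(p, ap) <= RADIUS for ap in aps))
        return covered / len(GRID)

    def neighbour(aps):
        """Perturb one randomly chosen AP by a small random step."""
        aps = list(aps)
        i = random.randrange(len(aps))
        x, y = aps[i]
        aps[i] = (min(19, max(0, x + random.uniform(-2, 2))),
                  min(19, max(0, y + random.uniform(-2, 2))))
        return aps

    def anneal(steps=2000, temp=1.0, cooling=0.995):
        aps = [(random.uniform(0, 19), random.uniform(0, 19))
               for _ in range(N_APS)]
        cov = coverage(aps)
        best, best_cov = aps, cov
        for _ in range(steps):
            cand = neighbour(aps)
            cand_cov = coverage(cand)
            # accept improvements always; worse moves with a
            # temperature-dependent probability (cooling reduces it)
            if (cand_cov > cov or
                    random.random() < math.exp((cand_cov - cov) / temp)):
                aps, cov = cand, cand_cov
                if cov > best_cov:
                    best, best_cov = aps, cov
            temp *= cooling
        return best, best_cov

    random.seed(0)
    placement, cov = anneal()
    print(round(cov, 2), [(round(x, 1), round(y, 1)) for x, y in placement])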

G. W"olfle and F. Landstorfer. Dominant Paths for the Field Strength Prediction. 1998. in Proc. IEEE VTC, pages 552--556.
Calculates the received signal strength at each point in a building by means of building a dominant path model. A tree is built of possible paths, and also of corners if there is no LOS. The path loss is calculated for each path, and the dominant one chosen. This appears to require a fair amount of computation, but apparently less than ray tracing, as the latter only considers reflections and path loss, not losses through walls (whereas the dominant path might well be through a wall). The authors use a neural network to further refine their predictions. It is not clear how susceptible the algorithm is to changes in the environment (e.g. doors opening/closing).

Willig, A. and Kubisch, M. and Hoene, C. and Wolisz, A. Measurements of a wireless link in an industrial environment using an IEEE 802.11-compliant physical layer. 2002. IEEE Transactions on Industrial Electronics, 49(6):1265--1282.
Describes the results of testing an 802.11-like link in an industrial environment. No MAC layer is used, instead the actual physical layer is analysed. The authors conclude that in a varying industrial environment, the behaviour of the connection varies significantly, as do the packet loss and burst loss characteristics. Because of the bursty nature of the connection, there can be long periods of good performance; in addition, errors appear to occur in bursts too. Adaptivity is important in ensuring that when the channel changes, such things as modulation can be changed accordingly to better cope with the medium's characteristics. Care should also be taken in the placement of antennae.

G. W"olfle and P. Wertz and F. Landstorfer. Performance, Accuracy and Generalization Capability of Indoor Propagation Models in Different Types of Buildings. 1999. in Proc. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications.
Four of the most popular models for radio propagation (the empirical direct-path, statistical, dominant path, and ray-optical models) are compared to one another and to power measurements in different buildings. The comparison is limited to the prediction of the received power. Three different buildings were used for the comparison of the models: a new office block, an older office building with brick walls, and an old villa. The empirical direct-path and statistical models do not generalise well beyond the building they are calibrated in. The dominant path and ray-optical models are much more accurate. The less accurate the database of walls and furniture, the worse the ray-optical model (which is also limited by its number of interactions) becomes; beyond a certain level of database inaccuracy the dominant path model wins out, being tolerant of small errors in the database, as it relies only on knowing which rooms adjoin which others, and where corners are located (see two papers further up).

C. Roehl and H. Woesner and A. Wolisz. A Short Look on Power Saving Mechanisms in the Wireless LAN Standard IEEE 802.11. 1998. in Advances in Wireless Communications, pages 219--226. Kluwer.
Describes the methods used in IEEE 802.11 for power saving in both infrastructure and ad hoc networks. In networks with a fixed AP, beacons are broadcast that contain a timestamp with which all stations synchronise. Stations then need only wake up to listen to beacons. If traffic is waiting at the AP for a particular station, this is expressed in a Traffic Indication Map (TIM) transmitted directly after the beacon. Mobile nodes then poll the AP for the packets. In ad hoc mode, synchronisation is achieved by mobile nodes competing to send out a beacon. All other nodes then synchronise with the winner. Packets waiting to be sent by stations are announced in a special period known as the Ad Hoc TIM (ATIM), after the beacon, where stations can announce that there are packets waiting for others. The receivers will hear the beacon, then the ATIM. If there are packets waiting, that station will not go to sleep but will instead acknowledge the ATIM, and wait for the packets to be transmitted. After reception nodes may sleep (i.e. turn off their radio receivers) until the next estimated beacon reception time. The authors present simulation results as to how long the ATIM window should be.

Daji Qiao and Sunghyun Choi. Goodput enhancement of IEEE 802.11a wireless LAN via link adaptation. 2001. in Proc. IEEE ICC, pages 1995--2000.
Good overview of the MAC-level workings of IEEE 802.11a. Describes how the packet error rate varies depending on the FEC rate, and shows goodputs for different SNR situations, using the different modes of the protocol (varying modulation schemes and coding rates). The authors note that the PLCP header is always transmitted using BPSK modulation and an FEC rate of 1/2, regardless of the mode for the remainder of the transmission. They also find that the goodput for QPSK with rate 1/2 is always greater (irrespective of SNR) than BPSK with rate 3/4. They attribute this to the fact that the 3/4 rate code is achieved by puncturing the 1/2 rate code, resulting in far poorer error correction, offsetting the less robust modulation scheme. Another point is that fragmentation generally decreases goodput (framing overheads etc.), but at low SNRs it increases it (if only a fixed percentage of packets reach their destination, more short fragments are better). The paper concludes by recommending that an adaptive mode-choosing scheme would be beneficial in order to optimise goodput.

IEEE Working Group 11. IEEE Std 802.11-1999 Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications. 1999. IEEE.

IEEE Working Group 11. IEEE Std 802.11-1999 Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications: High Speed Physical Layer in the 5 GHz Band. 1999. IEEE.

IEEE 802.20 Working Group. IEEE Draft Standard P802.20 (D4.1m) for Local and Metropolitan Area Networks -- Standard Air Interface for Mobile Broadband Wireless Access Systems Supporting Vehicular Mobility -- Physical and Media Access Control Layer Specification. 2008. IEEE.

IEEE 802.16 Working Group on Broadband Wireless Access. IEEE Standard 802.16e-2005 for Local and metropolitan area networks Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access Systems Amendment 2: Physical and Medium Access Control Layers for Combined Fixed and Mobile Operation in Licensed Bands. 2006. IEEE.

IEEE Working Group 11. IEEE Std 802.11g-2003 (Amendment to IEEE Std 802.11, 1999 Edn. (Reaff 2003) as amended by IEEE Stds 802.11a-1999, 802.11b-1999, 802.11b-1999/Cor 1-2001, and 802.11d-2001). 2003. IEEE.

Senza Fili Consulting. Fixed, nomadic, portable and mobile applications for 802.16-2004 and 802.16e WiMAX networks. 2005. WiMAX Forum.
Good overview of the WiMAX standard, data rates expected, and modulation schemes.

Héctor Velayos and Gunnar Karlsson. Techniques to Reduce IEEE 802.11b MAC Layer Handover Time. 2003. TRITA-IMIT-LCN R 03:02, KTH, Royal Institute of Technology, Stockholm, Sweden.
Various mechanisms to improve 802.11b handover times, including the modification of the beacon interval of the AP. Detection time can be improved by detecting lost packets, whilst search times can be reduced by using active searching (rather than passively waiting for beacons).

C. Sweet and V. Devarapalli and D. Sidhu. IEEE 802.11 Performance in an Ad-Hoc Environment. 1999. UMBC Department of Computer Science & Electrical Engineering.
Describes various IEEE 802.11b simulations, concentrating particularly on the performance of the PCF and DCF modes. The authors find that at greater than 100 p.c. loads, PCF performs better than DCF, as the former ensures that there is less contention for the medium by the co-ordinator polling stations, rather than the medium being random access. They also show how the throughput varies with packet fragmentation threshold, and with offered load (number of stations). Finally, they examine the effect of propagation delay and beacon interval, the latter because it affects the length of the contention free period (CFP) used in the PCF. The longer the CFP, the greater the fraction of time that access to the medium is controlled by the co-ordinator, and therefore the lower the contention. At higher loads, the beacon interval at which the PCF performs better than the DCF is lower.

Maksuriwong, K. and Varavithya, V. and Chaiyaratana, N. Wireless LAN access point placement using a multi-objective genetic algorithm. 2003. in Proc. IEEE International Conference on Systems, Man and Cybernetics, pages 1944--1949.
Uses a genetic algorithm in order to maximize SNR over a coverage area, placing multiple access points.

Eddie Green. Radio Link Design for Microcellular Systems. 1990. British Telecom Technology Journal, 8(1):85--96.
Describes various experiments measuring the signal strength of microwave transmissions in urban and highway environments. The 4-path model is described, where in an urban canyon both reflections from the road (2-path) and from the walls must be considered. The four path model shows good agreement with real data from an urban canyon. In contrast, highways appear to be better represented by the 2-path model. Double regression is used on the results in order to ascertain whether there exists a breakpoint in the data, i.e., the value of the independent variable beyond which the data are better represented by a different curve from that used for the first section of the data. This breakpoint is noted to be partly contingent on antenna height and signal wavelength. The existence of the breakpoint has important implications for microcell sizes, to ensure that handovers proceed smoothly. The author also comments on the variability of a signal in the microcell case, and notes that the RSSI has significant dips and peaks. This translates into large variations in BER, which may cause call dropping for voice transmissions. The paper presents an analysis of how the K-factor (related to how dominant a single path is in the received signal: a higher K-factor implies a greater dominance) varies for micro- and macro-cells. Microcells experience a higher, but much more variable, K-factor, whereas macrocells have a relatively constant, but low, K-factor.
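
For reference, the classic 2-path (ground reflection) model can be sketched as below; it reproduces the dips and peaks at short range and the steeper decay beyond the breakpoint (the antenna heights and frequency are illustrative, not the paper's measurement parameters):

    # Hedged sketch of the classic 2-path (ground reflection) model.
    # Antenna heights and frequency are illustrative assumptions; the
    # phase-difference formulation is the standard textbook one.

    import math

    def two_ray_rx_power_db(d, ht=10.0, hr=1.5, freq_hz=900e6,
                            reflection=-1.0):
        """Relative received power (dB): direct + ground-reflected rays."""
        lam = 3e8 / freq_hz
        d_direct = math.hypot(d, ht - hr)
        d_reflect = math.hypot(d, ht + hr)
        phase = 2 * math.pi * (d_reflect - d_direct) / lam
        # complex sum of the two ray amplitudes (free-space 1/d decay)
        field = (1 / d_direct) + reflection * complex(
            math.cos(phase), math.sin(phase)) / d_reflect
        return 20 * math.log10(abs(field) * lam / (4 * math.pi))

    for d in (50, 100, 200, 400, 800):
        print(d, round(two_ray_rx_power_db(d), 1))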

D. Kotz and C. Newport and C. Elliott. The mistaken axioms of wireless network research. 2003. Dartmouth College Computer Science Department.
Outlines six axioms generally assumed by researchers conducting simulations to assess the performance of mobile ad hoc network routing protocols, and shows using experimental data that they do not hold. The axioms are: the world is flat (no account is taken of terrain variations, such as hills, which may either improve or eliminate connection quality); a radio's transmission area is circular (in reality transmission areas are not even convex, and may be non-contiguous); all radios have equal range (occlusions such as buildings or foliage, or differing antenna heights, mean that ranges vary wildly); connections are symmetrical (in many cases link quality is not symmetrical, and may be completely asymmetrical); any degree of signal strength implies perfect connectivity (very little account is taken of errors in the radio channel arising from interference); and signal strength is a simple function of distance (related to the circular coverage axiom; this implies that triangulation to find a position cannot be done accurately on the basis of signal strength implying distance to the transmitter).

Qing Xu and Raja Sengupta and Daniel Jiang. Design and analysis of highway safety communication protocol in 5.9 GHz dedicated short range communication spectrum. 2003. in Proc. IEEE VTC, pages 2451--2455.
Describes the expected operation of the DSRC protocol for intervehicular applications, with a focus on the QoS required for safety uses. The system works using broadcast, rather than unicast, packets, but to address a particular vehicle there must be location-based addressing. This LBA is obtained using the broadcast technique. In turn, broadcasts must be co-ordinated such that they do not interfere with each other and render the system unusable. There are therefore trade-offs, such as the range of the radio transmissions, and the rate at which messages may be generated. The authors describe an LBB protocol, and analyse it using a simple traffic simulation. The protocol is essentially based around transmitting each message multiple times, but carefully choosing the timing and number of these transmissions in order to avoid collisions.

Tetsuro Imai and Tokio Saga. Statistical Scattering Model in Urban Propagation Environment. 2006. IEEE Transactions on Vehicular Technology, 55(4):1081--1093.
The authors conjecture that the scattering pattern of a transmitted wave down an urban street will be in an elliptical shape rather than circular. They therefore provide a mathematical basis for ascertaining the dimensions of this ellipse (ratio of height to width), and go on to perform an experiment in a city where the scattering is measured. They point out that the scattering pattern is an ellipse whose major axis is aligned to the street's axis. Also, the scattering power density around the mobile station is expressed by a combination of two Laplacian distributions with differing standard deviations.

Gaertner, G. and Cahill, V. Understanding link quality in 802.11 mobile ad hoc networks. 2004. IEEE Internet Computing, 8(1):55--60.
Overviews how 802.11 WLAN connections can vary in quality. The authors evaluate how signal strength varies with user orientation, and also when a car is positioned between the two communicating nodes (at a distance of 1 metre this causes significant TCP losses; at 5 metres it continues to have an effect). They also note that antenna height and equipment manufacturer play a role.

Adam Collura and James R. Petrus and Scott Weber. Investigation of Passive Wireless Bandwidth Measurement on a Loaded Network Using Signal Strength. 2002. Lehigh University.
Records how IEEE 802.11b signal strength and UDP throughput vary. With a single client, throughput is correlated with RSSI. However, with multiple clients sending traffic at random intervals, the correlation is not present.

R. K. Guha and S. Sarkar. Characterizing temporal SNR variation in 802.11 networks. 2006. in Proc. IEEE WCNC, pages 1408--1413.
The authors construct a birth-death many-state Markov model, which they then use to predict future SNR and hence packet loss. They compare the results of their predictions with the actual packet loss rates for real traces. The SNR prediction errors are under 4 p.c. The authors note that at different times of day, the usage of an AP can vary significantly, and hence so does the channel. Thus, a different model must be used for each different time period, but these models can be re-used each day. The authors also point out that the traditionally used 2-state Markov model works well when there are few other users of the AP, but that their many-state model performs better with higher AP user loads.
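
A toy version of a birth-death Markov chain over discretised SNR states, used to predict the state distribution a few steps ahead (the state count and transition probabilities are illustrative, not fitted values from the paper):

    # Hedged sketch of a birth-death Markov chain over discretised SNR
    # states, in the spirit of Guha and Sarkar: transitions only to
    # adjacent states. Probabilities and state count are illustrative.

    import random

    N_STATES = 8             # SNR bins, low to high
    P_UP, P_DOWN = 0.3, 0.3  # birth-death: move only to neighbours

    def step(state):
        r = random.random()
        if r < P_UP and state < N_STATES - 1:
            return state + 1
        if r < P_UP + P_DOWN and state > 0:
            return state - 1
        return state

    def predict_distribution(state, horizon, n_runs=10000):
        """Monte Carlo estimate of the SNR-state distribution `horizon`
        steps ahead, usable to predict future packet loss."""
        counts = [0] * N_STATES
        for _ in range(n_runs):
            s = state
            for _ in range(horizon):
                s = step(s)
            counts[s] += 1
        return [c / n_runs for c in counts]

    random.seed(1)
    print([round(p, 2) for p in predict_distribution(state=4, horizon=5)])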

Chen Na and J. K. Chen and T. S. Rappaport. Measured Traffic Statistics and Throughput of IEEE 802.11b Public WLAN Hotspots with Three Different Applications. 2006. IEEE Transactions on Wireless Communications, 5(11):3296--3305.
Describes various measurements of throughput, packet size distribution etc. at 4 different public WLAN hotspots at restaurants in Texas, USA. The authors give graphs of throughput vs. SNR, and apply two different curve fitting algorithms to them. They report that there is good correlation between SNR and throughput, provided that a building-calibrated model is used for the curve fitting.

Benjamin E. Henty. Throughput Measurements and Empirical Prediction Models for IEEE 802.11b Wireless LAN (WLAN) Installations. 2001. Master's thesis, Virginia Polytechnic Institute and State University.
Describes a measurement campaign of WLAN APs and their signal strengths. Proposes, among other ideas, the usage of linear and exponential fits to plots of throughput vs SNR. These appear to work well for smoothed data. Also develops an equation for the case with 2 users.

D. H. Layer. Digital radio takes to the road. 2001. IEEE Spectrum, 38(7):40--46.
Describes the Sirius and XM radio satellite deployments. Both transmit in the 2.3 GHz band, 25 MHz wide. Each of the 50 channels carries 4 to 4.4 Mb/s of data (before channel coding). Both systems use terrestrial repeaters to cover difficult areas such as tunnels. XM uses 2 geostationary satellites, Sirius 3 elliptical orbit ones. Receiver units use omnidirectional antennas. All satellites transmit the same signals, but on different frequencies, and shifted in time by 4-5 seconds. This ensures that if one satellite is blocked, the audio stream can be maintained. The article also mentions the in-band/on channel (IBOC) digital broadcasting technique, where digital transmissions are placed on the side channels of existing AM radio transmissions.

Evans, B. and Werner, M. and Lutz, E. and Bousquet, M. and Corazza, G.E. and Maral, G. and Rumeau, R. Integration of satellite and terrestrial systems in future multimedia communications. 2005. IEEE Wireless Communications, 12(5):72--80.
Describes how satellites are unlikely to be used for broadband access on their own, but instead as part of an integrated network, used to deliver multimedia content. Analogue satellites simply reflect beams they receive from earth, and hence do not amplify the signal (or perform any type of switching). This then means that with VSATs sometimes a hub station with a large antenna is needed that receives the transmission, decodes it, then retransmits at high power in order that another VSAT is able to receive it. Services for mobile users are available, particularly from GEO satellites (where user equipment antennas do not have to track the path of the satellites). The demand for frequencies in the 3-6 GHz band is likely to be greatest, as this is closest to the bands used for terrestrial cellular systems (and hence integrating satellite communication into the same handsets will be cheapest with closest bands). In 2005 INMARSAT IV took digital communication throughputs from 64 kb/s to 438 kb/s. Increasing the power, number of spot beams, and reflector array size of satellites will boost their effectiveness.

Nortel Networks. HSDPA and Beyond. 2005. Nortel Networks.
Good overview of the High Speed Downlink Packet Access protocol of the UMTS standard. Also overviews HSUPA and HSOPA.

Gérard Maral. VSAT Networks. 2003. Wiley. 2nd edition.
Good overview of Very Small Aperture Terminals used for satellite communications. Key points to note are that VSATs generally have small antennas and low power budgets, and hence are more susceptible to interference effects than large earth stations. Communications between two VSATs are generally bounced off the satellite to an earth station, decoded, and re-transmitted to be bounced off a satellite to the receiving VSAT, as otherwise the amount of attenuation would mean that the receiver (with its small antenna) would not be able to decode the original transmitted signal. VSATs also work well with geostationary satellites, in order that the antenna is not required to be steerable (extra cost/space). Spot beams from satellites can aid reception and decrease interference. VSAT telephones are available that are of a similar size to old cellular handsets. Upload throughputs are of the order of 64-128 kbit/s.

Levente Buttyán and Jean-Pierre Hubaux. Security and Cooperation in Wireless Networks. 2007. Cambridge University Press.

E. C. Efstathiou and P. A. Frangoudis and G. C. Polyzos. Stimulating Participation in Wireless Community Networks. 2006. in Proc. IEEE INFOCOM, pages 23--29.
Describes a receipt-based scheme, where users of community WiFi networks are encouraged to participate by basing the amount of service they receive on how much they provide. Each usage of an AP involves the client providing the AP with a receipt (which is signed by the client). These receipts are stored on one or more central servers and are used to construct a directed graph of usage, with weights corresponding to the service provided. This can be used to see whether a user 'owes' another, and hence whether they should provide them with service. Various attacks are considered and mitigated against. Simulations are used to assess performance. The protocol has been implemented on Linksys hardware, and its actual performance measured.

Steven Ashley. Cognitive Radio. 2006. Scientific American, pages 66--73.
Overviews cognitive radio, as distinct from Software Defined Radio (SDR). In the latter, the radio hardware is able to transmit and receive on multiple frequencies and using various different standards. Cognitive Radio (CR) builds on this by ascertaining _when_ each of these different frequencies/standards should be used, i.e. when is it best to hand off to a different technology, perhaps even for a few seconds? The author asserts that there is no real shortage of spectrum, given that much of it is underused at most points in space and time, and that the reason for the current allocation is to satisfy peak demand. Adaptivity will ensure better pricing for consumers. Sensing the environment is key for a CR, as it will need to choose the correct frequency to transmit on, as well as ensuring that it focuses its transmission energy towards its intended receiver. Such sensing will require high sensitivity receivers. The author also notes that the information a CR senses about its environment can be passed to other CRs for cooperative transmission.

Roy Rubenstein. Radios Get Smart. 2007. IEEE Spectrum, pages 46--50.
Another overview of cognitive radio. The point is made that two CRs attempting to communicate must somehow find each other in the spectrum (i.e. choose the same frequency to communicate on). Solutions currently involve sending out easily recognisable probe waveforms on candidate frequencies. Once they can communicate, the two CRs exchange information on what frequencies around them are unavailable. The IEEE 802.22 group intends for a base station to gather information from CRs around it, and then distribute the information back out, as well as describing what channels can be used at any point in time. Ofcom in the UK intends for spectrum licensees to be able to sub-license spectrum (and hence potentially allow cognitive radios to exist in it), but the regulator does not currently intend for cognitive radio to be used over wide bands of spectrum, unlike the FCC in the USA.

Qing Zhao and Brian M. Sadler. A Survey of Dynamic Spectrum Access. 2007. IEEE Signal Processing Magazine, pages 79--89.
Overviews the field, with a good number of references. Distinguishes between the dynamic exclusive use model (which can be split into spectrum property rights, where licensees are free to trade and sub-allocate, and dynamic spectrum allocation, which is the current situation but with much faster-changing allocations), the open sharing model (spectrum commons: similar to the idea of today's ISM bands, where users must co-exist somehow), and the hierarchical access model (which can be split into spectrum underlay, where users transmit on a wide band at low power, as in UWB, and spectrum overlay or opportunistic spectrum access, where users detect unused slots on licensed frequencies and opportunistically use them for a short period of time). In the hierarchical access model, the idea is that secondary users do not noticeably interfere with primary users (the licensees). The paper then goes on to describe the interference constraint, based on the probability of correctly detecting a free channel against the probability of incorrectly identifying a channel as free and hence interfering with a primary user. Techniques for the detection of primary transmitters/receivers are briefly overviewed. Spectrum opportunity tracking using Partially Observable Markov Decision Processes (POMDPs) is mentioned. How secondary users (multiple cognitive radios) will share access to spectrum is also presented as a graph colouring problem. Finally, policy considerations for CRs are discussed.

P. Leaves and J. Huschke and R. Tafazolli. A Summary of Dynamic Spectrum Allocation Results from DRiVE. 2002. in Proc. IST Mobile Summit, pages 245--250.
Simulations used to share spectrum between UMTS and DVB-T. The authors attempt to predict the loads on the two networks, and share spare spectrum between them. This is done by using a historical database of usage, along with time-series prediction to allow for transient effects not reflected in the database. The paper also describes a spatial (rather than temporal) re-allocation scheme, where the algorithm must ensure that neighbouring base stations do not utilise the same frequency. The evaluation examines how much better spectrum utilisation is with such dynamic allocation schemes than without them.

P. Leaves and M. Breveglieri and C. Hamacher and D. Grandblaise and F. Migneret and K. Moessner and D. Bourse. Dynamic Spectrum Allocation and System Coexistence in Reconfigurable Multi-Radio Networks. 2003. in Proc. IST Mobile Summit.
Overviews the work in the OverDRiVE project on Dynamic Spectrum Allocation. The two types considered are temporal DSA and spatial DSA. Necessary minimal distance and guard band sizes for protection of users from each other are considered, though most results are TBC in the online version of the paper.

Stefan Geirhofer and Lang Tong and Brian M. Sadler. Dynamic Spectrum Access in the Time Domain: Modeling and Exploiting White Space. 2007. IEEE Communications Magazine, 45(5):66--72.
Using a testbed of WLAN stations, a statistical model was derived for when the WiFi channel was idle. This allows better co-existence with Bluetooth by changing the probability of transmission based on how many idle slots have been heard thus far. The authors point out that spatially diverse sensing is important, as are models of how the channel's usage varies at any one location over time. Such models can be constructed from knowledge of the traffic statistics and technological characteristics of the transmitters in that band.
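An illustrative sketch of the idle-slot idea: the longer the channel has been observed idle, the higher the secondary device's transmission probability. The paper fits a statistical model to measurements; the saturating function below is only an invented stand-in.

    # Invented stand-in for an idle-slot-driven transmission policy: the
    # probability of transmitting grows with the number of consecutive idle
    # slots heard, saturating at `cap`. A real system would fit this curve
    # to measured channel statistics.
    def tx_probability(idle_slots_heard, scale=5.0, cap=0.9):
        return cap * (1.0 - 1.0 / (1.0 + idle_slots_heard / scale))

    for n in [0, 2, 5, 20]:
        print(n, round(tx_probability(n), 3))   # 0.0, 0.257, 0.45, 0.72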

Richard S. Whitt. Ex Parte Filing to the FCC. 2007. FCC Proceeding 06-229, Google Inc..
Google's submission to the FCC of 21/05/2007, requesting that the FCC clarify that dynamic auctions of spectrum are allowed under current and new licensing arrangements. This is where a licensee chooses to sub-license, on a dynamic basis, its spectrum to other providers, thus recouping the costs of its initial license fee. Google also advocates the use of spectrum sensing secondary user devices in primary user bands. Two dynamic auction methods are proposed: firstly, devices are allocated short-term caps on power outputs at particular frequencies at particular geographical locations. Secondly, users pay a one-off access fee on purchasing their device. The spectrum that device uses has service provided on it by a dynamically assigned provider. The submission also mentions the 6 MHz of unpaired spectrum in band E of the TV spectrum, which it points out is ripe for use for two-way Internet-based communications.

Richard S. Whitt. Ex Parte Filing to the FCC. 2007. Proceeding 06-129, Google Inc..
Google's submission to the FCC of 09/07/2007, requesting that the Commission force bidders for spectrum in the 700 MHz band to allow open applications, open devices, open services and open networks. This would allow users to install and use whatever applications/services they wished, on whatever devices were capable of connecting to the wireless network, and allow ISPs to request connection to the 'last mile' of the wireless networks, e.g. the cellular towers. Such open access provisions would ensure that purchasers of spectrum promoted more efficient use of it. Google also point out that without such guarantees, incumbents have a much greater financial incentive than new entrants to block-purchase spectrum, and then let it lie fallow in order to further their current business models.

Richard S. Whitt. Ex Parte Filing to the FCC. 2007. FCC Proceeding 06-229, Google Inc..
Google's submission to the FCC of 24/07/2007. Emphasises the benefits of an entirely open approach to spectrum, and describes the opportunities that would exist under the current draft of the regulations for operators to circumvent the ideal of open access, such as by offering incentives for users to purchase devices that were locked to a network, or restricting the use of VoIP software from other providers because it was not licensed by the operator. The submission also asks for enforceable penalties to be included in the regulation to ensure that licensees abide by the open access (etc.) terms.

Scott Blake Harris and Edmond J. Thomas and S. Roberts Carter. Comments of Dell Inc., Google Inc., The Hewlett-Packard Company, Intel Corp., Microsoft Corp., and Philips Electronics North America Corp. in the matters of unlicensed operation in the TV Broadcast Bands and Additional Spectrum for Unlicensed Devices Below 900 MHz and in the 3 GHz band. 2007. FCC Proceeding 04-186, Harris, Wiltshire and Grannis LLP.
The coalition of companies submitting these comments is willing to supply the FCC with a spectrum sensing device that is capable of ensuring that its transmissions are of a low enough power, and its sensing of great enough sensitivity, that it will not interfere with licensed TV-band transmitters/receivers. The coalition is of the opinion that a geolocation database as proposed by the FCC is a costly approach, requiring indoor and outdoor location systems, and the database to be maintained. Unlicensed TV band devices should use transmission power control to further decrease the probability of interference. As regards the hidden node problem, the coalition notes that the device will have a receiver that is substantially more sensitive than a normal TV receiver, and moreover will only have to detect (using pattern matching algorithms) the presence of a TV signal, and hence not require the SNR needed for picture decoding. The coalition also believes distributed sensing is unnecessary, preferring instead that all devices utilise sufficiently sensitive receivers and low enough transmit powers. The coalition also points out that both licensed and unlicensed devices are capable of being modified by consumers to cause interference in bands which were not destined for use by them, and hence there is no advantage to licensing spectrum using this as a reason. Overall, the coalition compares TV band white spaces to the 2.4 GHz band, where applications have flourished, and for which the economic gains have been great.

Jim Hoffmeyer. IEEE 1900: Next Generation Radio and Spectrum Management. 2006. IEEE.
Overviews the work of the IEEE 1900 committee. This is split into several working groups, including one writing definitions, another on measuring interference between transmitting devices, and another on conformance evaluation of the building blocks in an SDR. Two study groups also exist, one on regulatory policies, and the other on optimising spectrum use by devices that are able to use multiple radio technologies using co-ordinated sensing (IEEE 1900.4). The latter will establish a protocol for what information is distributed between CRs, and how. This committee is for general CR technology, whereas the IEEE 802.22 working group is concerned with CR in the TV-bands.

Anil Shukla and Aynur Alptekin and Julie Bradford and Eddie Burbidge and Derek Chandler and Mike Kennett and Paul Levine and Stephan Weiss. Cognitive Radio Technology: A Study for Ofcom. 2007. QINETIQ/06/00420, QinetiQ.
Report by QinetiQ for Ofcom on the potential and future deployments of cognitive radio. The four applications identified were mobile multimedia downloads, which require moderate data rates and near-ubiquitous coverage; emergency communications, which require moderate data rates and localised coverage; broadband wireless networking; and multimedia wireless networking requiring high data rates (e.g. within homes). Concern is expressed over the hidden node problem, but the report describes this as not being insurmountable. Security of spectrum sensing (whether by database, monitoring stations, or direct sensing by CRs themselves) is a concern (e.g. how is the data from sensing transmitted to CRs without being maliciously corrupted along the way?). Also, CRs will need clearly specified regulatory policies in order to be aware of how they should behave in certain situations (e.g. near airports). A network of monitoring stations which help keep a spectrum database up-to-date is also given as one way of controlling access to spectrum. The deployment of these could take 5--10 years. Appendix H of the document describes spectrum control techniques in detail. Dynamic Frequency Selection (as will be used in 802.16) is described, as is Free Channel Search (as used in military radios; all radios step through each possible channel in a synchronised fashion, and may use link quality assessment to decide which channel should be used). A purely database-driven control technique, where there is a central co-ordinator for a network of CRs, is also described, with the database being issued by the spectrum regulator, and hence not frequently updated (thus not allowing highly dynamic changes in spectrum usage). This could be developed so that each CR had such a database and a GPS receiver, and was thus aware of what primary users there were on each frequency. A database approach would allow primary users to specify what conditions would apply to any secondary users of their spectrum. Another approach is to use a database supplemented by one or more monitoring stations. A single monitoring station might be used at the central node of a star-shaped CR network, or one monitoring station at each CR, whilst a nationwide or regional network of monitoring stations could also be used. Decoupling CRs from monitoring stations has the advantage that monitoring stations may be placed in order to cover much of the country, and hence be aware of primary users that might otherwise not be detected (and hence suffer from the hidden node problem) if solely CRs were used as monitoring stations. Monitoring stations update the central database, which is then communicated out to all CRs. The final approach is that CRs use only sensing to ascertain free channels, and no database. This is seen as problematic, due to the hidden node problem, and due to the added overhead of having to perform data processing to calculate what the operation times/traffic patterns of the primary user are. Spectrum policing/charging is seen as crucial, as spectrum is an economic resource, and hence this is also a concern when not using a database approach.

Timo A. Weiss and Friedrich K. Jondral. Spectrum Pooling: An Innovative Strategy for the Enhancement of Spectrum Efficiency. 2004. IEEE Communications Magazine, 42(3):S8--S14.
Describes spectrum pooling, and points out that devices that are able to perform such pooling must ensure that they do not collide with legacy devices. Hence, the paper describes using the OFDM physical layer to both sense free channels, and only transmit on those free channels ('transmitting' zeroes on the used ones).

Amir Ghasemi and Elvino S. Sousa. Collaborative spectrum sensing for opportunistic access in fading environments. 2005. in Proc. 1st IEEE Symposium on Dynamic Spectrum Access Networks, pages 131--136.
Multiple nodes sense the spectrum for the presence or absence of a primary user. This improves the correctness of primary user detection, as the results from the sensing can be combined to give a decision that is likely to be less affected by shadowing, as multiple users in relatively close proximity are unlikely to be in the same region of radio shadow. The authors also point out that when the shadowing is correlated, larger separation distances between users are required to achieve the same detection performance.

S. M. Mishra and A. Sahai and R. W. Brodersen. Cooperative Sensing among Cognitive Radios. 2006. in Proc. IEEE ICC, pages 1658--1663.
Using multiple radios to decide whether a primary user exists in a particular radio band. Each cognitive radio reports a binary interpretation of whether there is a primary receiver in the band. The decision strategy used can cope with 1 of the N users failing (either due to error or malice); hence ``performance by cooperation gains are limited to what is possible with N trusted users.'' The impact of adversaries who always say ``yes'' or ``no'' is also analysed.
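A toy illustration of fusing such binary reports by majority vote, which a single faulty reporter cannot overturn for N >= 3; the paper's actual decision rules are more sophisticated.

    # Toy decision fusion: each cognitive radio votes on whether a primary
    # user is present; the fused decision is a simple majority, which a
    # single faulty or malicious reporter cannot overturn when N >= 3.
    def fuse_reports(reports):
        yes = sum(1 for r in reports if r)
        return yes > len(reports) / 2

    print(fuse_reports([True, True, False]))    # True: one dissenter outvoted
    print(fuse_reports([False, True, False]))   # False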

Qiwei Wang and Haitao Zheng. Route and spectrum selection in dynamic spectrum networks. 2006. in Proc. IEEE CCNC, pages 625--629.
Describes various ways to assign channels to nodes that decrease the overlap between them, thus increasing throughput for multi-hop networks, where messages must be routed through multiple nodes. The algorithms are evaluated by simulation.

Aditya Akella and Glenn Judd and Srinivasan Seshan and Peter Steenkiste. Self-Management in Chaotic Wireless Deployments. 2005. in Proc. ACM MobiCom, pages 185--199.
Using war-driving traces from PlanetLab and WiFiMaps, the authors examine what channels APs in chaotic (urban) deployments tend to use. They find that many APs are left on their default channels. They also find that the deployment density is such that there is very likely to be interference between neighbouring APs. Simulation is used to evaluate this interference. The positions of the APs in the simulation are taken from the inferred positions of the APs in the traces. The simulations examine what throughputs are achievable at different AP separations (the 'stretch' value). The authors examine how an optimal channel allocation strategy performs in this regard. They then also look at transmission power control to reduce interference (and hence increase possible AP density). Finally, they examine power and rate selection algorithms that aim to provide power control that does not impact transmission rates, but does minimise interference with other APs. These algorithms are evaluated using real equipment with suitably tweaked drivers.

Aditya Akella and Glenn Judd and Srinivasan Seshan and Peter Steenkiste. Self-management in chaotic wireless deployments: Extended Version. 2007. Wireless Networks, 13(6):737--755.
Extended version of above paper. Further details are given concerning the statistics of the different data sets. In particular, the authors derive the convex hulls of all points seen for each AP, and thence calculate the coverage area of each access point. The results suggest that some APs have coverage areas of greater than 100,000 square metres. These were situated near a river, where the area is more open. Also, the number of APs determined to be in range of each other was in many cases 3 or greater for each AP, implying dense deployments. At least one AP in Portland had 85 neighbours.

Ranveer Chandra and Paramvir Bahl and Pradeep Bahl. MultiNet: Connecting to Multiple IEEE 802.11 Networks Using a Single Wireless Card. 2004. in Proc. IEEE INFOCOM.
By virtualizing the network stack for a single wireless interface card under Windows XP, the authors are able to connect to multiple wireless networks simultaneously. They evaluate different network switching strategies, comparing the length of time a typical FTP transfer takes. They also analyse the energy saved by using MultiNet as opposed to multiple wireless cards. They point out that the switching behaviour has a detrimental effect on TCP, as RTTs will vary depending on whether packets are being buffered or conveyed directly to the card (depending on whether the card is currently on that network).

Joshua Bardwell. You Believe You Understand What You Think I Said. 2004. Connect802 Corporation.
Detailed explanation of the different (confusing) uses of the terms RSSI, SNR, and Signal Quality. RSSI is a manufacturer-specific measure of the received signal power of a transmitter, as compared to a particular maximum value. In other words, a manufacturer will choose a maximum RSSI value (e.g. 60), and map power values (in dBm) to steps of RSSI between 0 and 60. RSSI is then normally given as a percentage of the maximum RSSI value. This makes manufacturer RSSI indications incomparable. Signal Quality is defined by the IEEE standard to be a measure of the PN code correlation, i.e. (we assume) the percentage of bits that compose a single symbol (that in turn represents a single data bit) that are not corrupt. As an example, with a PN code of length 10, were 4 bits to be corrupt then the Signal Quality would be 60 p.c.. Clearly RSSI may be high, but interference may cause a low SQ (and hence a low data rate). SQ is only relevant for DSSS (given that this is the only modulation scheme that uses multiple transmitted bits per symbol). SNR must be carefully defined. There is thermal noise present on the receiver hardware that defines the minimum power of signal that must be received in order for the card to detect it (i.e. for the thermal noise not to overcome it): the receiver sensitivity. Ambient RF noise is the more usual definition, and SNR then indicates the power of the signal relative to the RF interference. Note that receive sensitivity must be quoted at a particular data rate (i.e. a particular modulation scheme), with a particular BER. Fade margin indicates the number of dB that are assumed to be potentially lost due to attenuation through the propagation medium (irrespective of RF interference). Engineers are likely to include a fade budget in order to allow a safety margin with respect to their estimates of coverage areas. This ensures that a receiver will always be able to receive a signal that is well above its receiver sensitivity. Finally, the author notes that on circa-2002 Cisco cards, when the received power is below -90 dBm, the driver is coded to report -75 dBm.
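Since RSSI scales are vendor-specific, comparing cards means mapping each scale back to dBm. A hypothetical sketch, assuming a linear mapping with invented calibration endpoints (real cards have, at best, different and non-linear curves):

    # Hypothetical vendor RSSI-step-to-dBm conversion, assuming a linear
    # mapping between invented calibration points. RSSI_MAX of 60 follows
    # the article's example; the dBm endpoints are illustrative only.
    RSSI_MAX = 60
    DBM_AT_ZERO = -95.0     # assumed power at RSSI step 0
    DBM_AT_MAX = -35.0      # assumed power at RSSI step RSSI_MAX

    def rssi_to_dbm(rssi):
        return DBM_AT_ZERO + (rssi / RSSI_MAX) * (DBM_AT_MAX - DBM_AT_ZERO)

    def fade_margin(received_dbm, sensitivity_dbm=-90.0):
        # How far the received power sits above the receiver sensitivity.
        return received_dbm - sensitivity_dbm

    p = rssi_to_dbm(45)
    print(p, fade_margin(p))   # -50.0 dBm, 40.0 dB margin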

Giuseppe Bianchi, Fabrizio Formisano, Domenico Giustiniano. 802.11b/g Link Level Measurements for an Outdoor Wireless Campus Network. 2006. in Proc. IEEE WoWMoM, pages 525--530.
Compares the performance of outdoor point-to-point links using IEEE 802.11b and 802.11g. The traffic used to test the links consisted of ICMP ping packets, at a throughput of 1.2 Mbit/s. The delivery rates for 802.11b at 1 and 11 Mb/s, and for 802.11g at 6 and 12 Mb/s, were compared. 802.11b at 1 Mb/s worked best, with 802.11g not performing at all well. The authors conclude that this is due to the shorter PLCP preambles and ``limited tolerance to the cyclic prefix'', despite the fact that OFDM is more resistant to multipath effects.

M Yarvis and K Papagiannaki and WS Conner. Characterization of 802.11 wireless networks in the home. 2005. in Proc. Workshop on Wireless Network Measurements.
Measurement study of 802.11b and 802.11a in three separate homes. UDP loss rates were measured, and different transmission powers compared. The results showed that loss rates and node separation were not correlated, that even in a home (a relatively small area) connectivity between all rooms is not guaranteed, that links can be asymmetric (as seen in other work), and that microwave oven interference is minimal as regards 802.11b loss rates. 802.11a appeared to have a more 'on or off' nature than 802.11b.

K. Papagiannaki and M. Yarvis and W. S. Conner. Experimental Characterization of Home Wireless Networks and Design Implications. 2006. in Proc. IEEE INFOCOM, pages 1--13.
Somewhat extended version of above paper.

Harry Holma and Antti Toskala. WCDMA for UMTS. 2002. John Wiley & Sons. 2nd edition.
Excellent book covering the lower layers of the UMTS cellular telephony standard.

Faqir Zarrar Yousaf and Kai Daniel and Christian Wietfeld. Performance Evaluation of IEEE 802.16 WiMAX Link With Respect to Higher Layer Protocols. 2007. in Proc. ISWCS, pages 180--184.
Experimental evaluation of WiMax measuring TCP and UDP uplink and downlink throughputs. Three different distances were used, 220 m, 5400 m and 9400 m. All modulation schemes were tested. Low transmit powers were used, without modulation scheme adaptation. TCP and UDP throughputs of up to 10 Mbps were observed, even at the highest range. Rain was observed to increase packet loss on the link (3.5 GHz).

D. Greenwood and L. Hanzo. Characterisation of Mobile Radio Channels. 1999. in Mobile Radio Communications, pages 91--186. John Wiley.
Good introduction to channel modeling, particularly Rician and Rayleigh fading.

Bernard H. Walke. Mobile Radio Networks: Networking, Protocols and Traffic Performance. 2002. Wiley. 2nd edition.

Steve Stroh. Wideband: multimedia unplugged. 2003. IEEE Spectrum, 40(9):23--27.
Overviews UWB, giving a range of 10 metres, and data rates of between 100 and 500 Mbit/s. Also mentions UWB over cable.

3GPP. 3GPP Technical Specification 25.213 version 7.5.0 Release 7: Universal Mobile Telecommunications System (UMTS) Spreading and Modulation (FDD). 2008. ETSI.

3GPP. Universal Mobile Telecommunications System (UMTS); UTRA high speed downlink packet access (3GPP TR 25.950 version 4.0.0 Release 4). 2001. ETSI.

Bob Pearson. Complementary Code Keying Made Simple. 2001. AN.9850.2, Intersil.
Good description of CCK, its origins, and how it applies to IEEE 802.11b.

Matthew Gast. 802.11 Wireless Networks: The Definitive Guide. 2005. O'Reilly. 2nd edition.
Excellent overview of each of the amendments to the 802.11 standard, including 802.11n.

Jim Geier. Wireless LANs: Implementing Interoperable Networks. 1999. Macmillan Technical Publishing. 1st edition.

Houda Labiod and Hossam Afifi and Costantino De Santis. Wi-Fi$^TM$ Bluetooth$^TM$ ZigBee$^TM$ and WiMax$^TM$. 2007. Springer.

Hagit Messer and Artem Zinevich and Pinhas Alpert. Environmental Monitoring by Wireless Communication Networks. 2006. Science, 312:713.
Short description of using cellular backhaul links (in the 10s of GHz range) to measure rainfall. The authors provide graphs of their measurements, those from a rain gauge, and those from a weather radar. The correlation is promising, though the technique is more accurate than radar and less accurate than a rain gauge.

Hagit Messer. Rainfall Monitoring Using Cellular Networks. 2007. IEEE Signal Processing Magazine, 24(3):142--144.
Longer version of the above article. Adds the objective of using cellular handsets to measure rainfall as well, but notes that the 1-2 GHz frequencies used are only weakly sensitive to rainfall.

R. Olsen and D. Rogers and D. Hodge. The $aR^b$ Relation in the Calculation of Rain Attenuation. 1978. IEEE Transactions on Antennas and Propagation, 26(2):318--329.
Provides calculations for the rain fade at frequencies between 1 and 1000 GHz. The attenuation rate $A$ in dB/km is modelled to depend on the rain rate, $R$, and two frequency and rain temperature dependent parameters, $a$ and $b$. For 2.5 GHz at 20 degrees C, $a$ is approximately 3.0e-05, and $b$ 0.95 (depending on the rain distribution used). For a rain rate of 50 mm/h, this equates to roughly 0.002 dB/km, i.e. insignificant. A theoretical basis for the equation is also given. Even at 5 GHz, with less than 100 mm/h of rain, the attenuation is less than 0.1 dB/km.
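The relation is simple enough to check the quoted figure directly; a quick sketch using the parameter values above:

    # The aR^b relation: attenuation A (dB/km) = a * R**b, with R in mm/h.
    # a and b below are the approximate 2.5 GHz, 20 degrees C values quoted
    # in the summary above.
    a, b = 3.0e-05, 0.95
    R = 50.0                  # mm/h, heavy rain
    print(a * R ** b)         # ~0.001 dB/km: the same order as the quoted
                              # figure, i.e. negligible at 2.5 GHz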

Masayasu Hata and Shigeyuki Doi. Propagation Tests for 23 GHz and 40 GHz. 1983. IEEE Journal on Selected Areas in Communications, 1(4):658--673.
Describes tests over a 1.08 km link at 23 and 40 GHz. Attenuation was correlated with rain rate. Rain was measured using several different gauges, and different expressions for calculating the rate of rainfall from the measurements (how to integrate them over time) were evaluated. The path loss measurements were compared to existing predictions. Snow and ice building up on the radome were found to cause up to 14 dB of attenuation at 40 GHz. Radome design also played a part in how much attenuation was seen in heavy rainfall, with water flowing around the central ``spike'' of a rounded radome, rather than uniformly over a flat-faced radome. The authors also tested how the plasma generated by a large fire affected transmissions.

G. Xylomenos and G. C. Polyzos. TCP and UDP performance over a wireless LAN. 1999. in Proc. IEEE INFOCOM, pages 439--446.
Describes experiments with Lucent WaveLAN cards indoors. The authors vary packet size and use TCP and UDP.

G. Gaertner and E. O. Nuallain. Characterizing Wideband Signal Envelope Fading in Urban Microcells Using the Rice and Nakagami Distributions. 2007. IEEE Transactions on Vehicular Technology, 56(6):3621--3630.
The authors point out that the well known Rice and Nakagami fading models apply only to narrowband signals, whereas many widely deployed systems today, such as UMTS, 802.11, and WiMax, are wideband. Using a wideband propagation model by Yan and Kozono, they generate a large quantity of wideband test data. They then show that the CDF and PDF of the small-scale wideband fading data generated can be approximated well using the Rice and Nakagami models, with suitable parameters of the Rician K-factor or the Nakagami m (shape) factor. They provide functions to convert from the path length spread and the ratio of LOS path power to other paths' powers to the K factor, approximated by means of a Chebyshev polynomial. The authors evaluate how well the Rice and Nakagami distributions using these parameters agree with the data, finding good agreement for microcellular environments. They then apply the model to the NS-2 simulator, and compare the bit error probability of IEEE 802.11b using their model (which implies significantly less small-scale fading), to that when a Rice model of a narrowband channel is used. The BEP for the wideband model is three orders of magnitude less. A good paper: well explained, many references.

Mai Tran and George Zaggoulos and Andrew Nix and Angela Doufexi. Mobile WiMAX: Performance Analysis and Comparison with Experimental Results. 2008. in Proc. IEEE VTC, pages 1--5.
Describes a simulator to examine the various modulation/coding schemes used for Mobile WiMAX. The authors find that a throughput of over 5 Mbit/s is possible when using the 3GPP channel model. Street-level range is found to be between 300 and 2100 metres, depending on allowed EIRP. The authors validate their simulation by conducting a drive-by test in a city environment, finding that the error rates predicted by the simulations match well with those obtained in the experiments, except at high SNRs.

Joseph Camp and Edward Knightly. Modulation rate adaptation in urban and vehicular environments: cross-layer implementation and experimental evaluation. 2008. in Proc. ACM MobiCom, pages 315--326.
Investigation into how different rate selection algorithms perform under a variety of channel conditions. The authors use Rice University's WARP platform, in conjunction with a channel emulator, to evaluate the throughputs achieved. The channel conditions are obtained on a per-packet basis, and the optimal rate for that packet found by exhaustive search. This is then compared to the rate that was selected by the rate selection algorithm in use. The authors find that rate selection algorithms using SNR, as well as those measuring numbers of consecutive packets lost, both mis-estimate the channel when coherence times are low. However, SNR overestimates the rate, whilst consecutive packet algorithms underestimate, both resulting in decreased throughputs, but for opposite reasons. If SNR-based algorithms are trained in a low coherence time environment, their propensity to overselect decreases. Of course, at higher coherence times they then underselect, but the authors point out that low coherence times are the norm in a vehicular environment (and provide mappings of vehicular speed to coherence time). The authors then add in multipath using the channel emulator, and find that SNR-based algorithms' sensitivity to coherence time is increased, thus increasing the value of training further. Adding in interference resulted in packet loss-based mechanisms underestimating the channel rate, whilst SNR-based ones did not. The authors then examine what takes place when there are two competing transmitters on the same channel. If there are only slight differences in the received channel powers (and yet both are sufficiently high for the same modulation scheme to be used), the difference in throughput can be significant. Following on from this, evaluations are performed in a real urban environment with static and mobile nodes. In the static case, channel measurements reveal that with other moving vehicles present, the coherence time can be as low as 300 $\mu$s, which is equivalent to vehicular speeds of 250 km/h. They then measure the throughputs achievable by the different rate-adaptation algorithms in both urban and downtown environments. SNR-based selection algorithms perform well in an urban scenario (less traffic), but in the downtown scenario, more vehicles meant that SNR-based algorithms tended to overselect, and packet loss-based algorithms underselected. Multipath is found to be significant, in that the downtown scenario had a 10 dB higher SNR, and yet performed more poorly than the urban scenario. With mobility, SNR-based algorithms adapted better than loss-based. With interference in a mobile scenario, loss-based algorithms tend to underselect, whilst SNR-based select accurately.
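The quoted speed-to-coherence-time mapping can be sanity-checked with the common Doppler approximation $T_c \approx 9/(16\pi f_d)$, where $f_d = v/\lambda$; the 2.4 GHz carrier below is an assumption, not necessarily the band the paper used.

    # Rough coherence time from vehicle speed, using Tc = 9 / (16 * pi * fd)
    # with maximum Doppler shift fd = v / wavelength. The 2.4 GHz carrier is
    # an assumption; the WARP experiments may have used a different band.
    import math

    def coherence_time(speed_kmh, freq_hz=2.4e9):
        v = speed_kmh / 3.6                    # m/s
        wavelength = 3e8 / freq_hz             # m
        fd = v / wavelength                    # Hz
        return 9.0 / (16.0 * math.pi * fd)     # seconds

    print(coherence_time(250) * 1e6)           # ~320 microseconds, matching
                                               # the ~300 us figure above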

Pravin Shankar and Tamer Nadeem and Justinian Rosca and Liviu Iftode. CARS: Context-Aware Rate Selection for Vehicular Networks. 2008. in Proc. IEEE ICNP, pages 1--12.
Attempts to optimise intervehicular communication (or vehicle to infrastructure communication) by selecting the rate at which the wireless card is configured to transmit. This rate selection is done on the basis of speed and (GPS) distance from the receiver. Such context information is weighted along with an exponentially weighted moving average of the previous packet error rate in order to ascertain an appropriate bit rate. The method therefore does not require traffic to be present on the link, as packets are not strictly necessary in order to evaluate the quality of the link. The method is compared using real traces, and simulated further in a micro-traffic simulator built by the authors. The scheme is compared to another rate selection algorithm, with the new algorithm performing well.
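A minimal sketch of the EWMA ingredient, with an invented mapping from smoothed loss to rate; CARS's actual context weighting (speed, distance) is more involved.

    # EWMA-smoothed packet error rate feeding a crude rate choice. The rate
    # table and the loss-to-rate rule are invented; CARS additionally weights
    # in context such as speed and distance to the receiver.
    RATES_MBPS = [6, 12, 24, 36, 48, 54]

    class RateSelector:
        def __init__(self, alpha=0.2):
            self.alpha = alpha
            self.per = 0.0                     # smoothed packet error rate

        def record(self, lost):
            sample = 1.0 if lost else 0.0
            self.per = self.alpha * sample + (1 - self.alpha) * self.per

        def pick_rate(self):
            # The higher the smoothed loss, the lower the selected rate.
            idx = int((1.0 - self.per) * (len(RATES_MBPS) - 1))
            return RATES_MBPS[idx]

    sel = RateSelector()
    for lost in [False, False, True, True, False]:
        sel.record(lost)
    print(round(sel.per, 3), sel.pick_rate())  # 0.288 36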

Chunsheng Xin and Bo Xie and Chien-Chung Shen. A Novel Layered Graph Model for Topology Formation and Routing in Dynamic Spectrum Access Networks. 2005. in Proc. IEEE DySPAN, pages 308--317.
Calculating the path through a network that utilises dynamic spectrum access is difficult because each hop may need to use a different channel. In this paper, each channel is given a layer in a graph, and each node that is able to use the channel has a subordinate node placed in that layer. The layers are interconnected by vertical edges to allow transitions between channels, where possible, at each node. Horizontal edges within each layer represent links between nodes. Shortest path routing is then used to find the channel assignments to be used. Simulations show that the algorithm performs better in terms of overall packet delivery rate than a simple sequential channel assignment algorithm. The simulations concern randomly distributed nodes, in a 1x1 area, with uniformly distributed transmission radii. The range of radii is varied, as is the number of nodes (15 and 30). Node mobility is not considered. Also, there appears to be the possibility of negative loops in the graph, which is not addressed by the authors. It is not clear whether the approach provides the optimal channel assignment, or simply a feasible one.
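A sketch of the layered-graph construction: replicate each node once per usable channel, add vertical edges for channel switches and horizontal edges for links, then run an ordinary shortest-path algorithm. The topology and edge costs below are invented.

    # Layered-graph routing sketch: vertex (n, c) is node n tuned to channel
    # c. "Vertical" edges model switching channels at a node; "horizontal"
    # edges model radio links on a shared channel. Costs here are invented.
    import heapq

    def build_graph(nodes, channels, links, switch_cost=0.5, hop_cost=1.0):
        adj = {}
        for n in nodes:
            for c in channels.get(n, []):
                adj.setdefault((n, c), [])
                for c2 in channels.get(n, []):
                    if c2 != c:
                        adj[(n, c)].append(((n, c2), switch_cost))
        for (u, v, c) in links:   # link between u and v, usable on channel c
            if c in channels.get(u, []) and c in channels.get(v, []):
                adj[(u, c)].append(((v, c), hop_cost))
                adj[(v, c)].append(((u, c), hop_cost))
        return adj

    def shortest(adj, src, dst_node):
        # Dijkstra from src (a (node, channel) pair) to any layer of dst_node.
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u[0] == dst_node:
                return d
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return None

    nodes = ["A", "B", "C"]
    channels = {"A": [1, 2], "B": [1, 2], "C": [2]}
    links = [("A", "B", 1), ("B", "C", 2)]
    adj = build_graph(nodes, channels, links)
    print(shortest(adj, ("A", 1), "C"))   # hop + channel switch + hop = 2.5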


RF-based Positioning

Andrew M. Ladd and Kostas E. Bekris and Algis Rudys and Lydia E. Kavraki and Dan S. Wallach and Guillaume Marceau. Robotics-based location sensing using wireless ethernet. 2002. in Proc. ACM MobiCom, pages 227--238.
Experiments using standard 802.11b wireless access points in order to derive location in several corridors of an office block. Graphs are presented comparing the actual location to the inferred location using Bayesian inference for static location, and a method using sensor fusion for mobile tracking. The Bayesian inference method is essentially fingerprinting, where data is collected on the number of times an AP is seen at a particular location, and the distribution of its signal strengths. The sensor fusion technique involves a hidden Markov model, interpolating the position from previous measurements from the inference engine. Results are mixed, with sensor fusion sometimes an advantage and at other times increasing error. The authors note that RF strength is dependent on random noise that is not Gaussianly distributed, and that it is dependent on receiver orientation. No indication is given of how the 'true' position was derived. An accuracy of 1.5 m 85 p.c. of the time is claimed, given a 'suitable' base station layout.

A. LaMarca and Y. Chawathe and S. Consolvo and J. Hightower and I. Smith and J. Scott and J. Howard and J. Hughes and F. Potter and J. Tabert and P. Powledge and G. Borriello and B. Schilit. Place Lab: Device Positioning Using Radio Beacons in the Wild. 2005. in Proc. IEEE PerCom, pages 116--133.
Describes the initial implementation of Place Lab, a project aiming to estimate the location of a user utilising solely beacons emitted by wireless base stations (be they GSM or 802.11b/g). The authors first experimentally determine the proportion of typical users' days that are spent within the coverage of GPS, GSM, and 802.11b/g. They conclude that (unsurprisingly) users are in the coverage of GSM almost 100 p.c. of the time, with 802.11b/g also performing well, both in urban areas. A database of locations where beacons are heard is maintained centrally. Location estimates are made using a Bayesian particle filter. Accuracy was found to be of the order of 20 m for 802.11b/g, 100 to 200 m for GSM, and 20 to 30 m for GSM combined with 802.11b/g. All of these are median errors: a graph of the error with error bars that span 50 p.c. of the values suggests that errors can be far higher. The authors comment that in their survey they did not encounter many fixed Bluetooth devices, and hence have elected not to attempt to use it for positioning.

Jeffrey Hightower and Anthony LaMarca and Ian Smith. Practical Lessons from Place Lab. 2006. IEEE Pervasive Computing, 5(3):32--39.
Describes how many of the aspects of Place Lab were instrumental in its success. In particular, keeping the system simple proved very important. In addition, using the very simplistic centroid averaging algorithm turned out to be a very efficient way of calculating location. Allowing the software to run on commodity hardware worked well because of its availability. Building in privacy from the start was also important in user acceptance. Various other lessons were learnt from the user studies, particularly that to provide useful location systems does not require the problem of specifying locations using ontologies to be solved.

M. Rabinowitz and J.J. Spilker Jr. A new positioning system using television synchronization signals. 2005. IEEE Transactions on Broadcasting, 51(1):51--61.
Describes an algorithm (used by Rosum, Inc.) to perform position estimation using broadcast TV signals. This requires at least 3 transmitters to be in reception range, and a timing station (as not all TV station signals are synchronised with each other) for the area. Indoor performance is good, as TV signals are high power and wide bandwidth, as well as on low frequencies. Positioning accuracy varies, but can be as good as 5 metres within a 95 p.c. confidence interval outdoors, and between 9 and 40 metres for a 95 p.c. confidence interval indoors, depending on the situation. The authors take the view that a great enough number of TV stations is normally visible in cities, and demonstrate this for San Francisco.

Paramvir Bahl and Venkata N. Padmanabhan. RADAR: An In-Building RF-Based User Location and Tracking System. 2000. in IEEE INFOCOM, pages 775--784.
Presents a location system based on WiFi signal strength. The authors collect RSS readings at 70 different locations over one floor of an office building. At each location they collect 20+ readings per orientation (four orientations). The RSS values for each AP in range at that location and orientation are averaged. In the testing phase, one of the 70 points is picked out, and the closest match found, thus providing a location. Next, $k$ nearest neighbour weighting is used, to provide a position that depends on more than one point. However, this tends to utilise all the points at different orientations, but the same location. For each location, the maximum value of RSS over all orientations is noted. These resulting 70 points are then used with the $k$ nearest neighbour algorithm, achieving better accuracy. Mobile users are dealt with using a time-window for averaging the signals. The next stage of the work involved using a radio propagation model that took into account the number of walls that a signal from each base station would need to traverse in order to reach each one of an evenly-spaced grid of points. The 50th percentile errors in position for the empirical method and the propagation model method were 2.94 and 4.30 metres respectively.
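A minimal sketch of the nearest-neighbour-in-signal-space idea, with an invented fingerprint table; RADAR's orientation handling and time-window averaging are omitted.

    # k-nearest-neighbour in signal space: each fingerprint maps a known
    # (x, y) position to mean RSS per AP; a query vector is matched to the
    # k closest fingerprints and their positions averaged. Data invented.
    import math

    fingerprints = [
        ((0.0, 0.0), {"ap1": -40, "ap2": -70}),
        ((5.0, 0.0), {"ap1": -55, "ap2": -60}),
        ((0.0, 5.0), {"ap1": -60, "ap2": -50}),
    ]

    def signal_distance(obs, fp):
        aps = set(obs) & set(fp)
        return math.sqrt(sum((obs[a] - fp[a]) ** 2 for a in aps))

    def locate(obs, k=2):
        ranked = sorted(fingerprints, key=lambda p: signal_distance(obs, p[1]))
        nearest = ranked[:k]
        x = sum(p[0][0] for p in nearest) / k
        y = sum(p[0][1] for p in nearest) / k
        return (x, y)

    print(locate({"ap1": -50, "ap2": -62}))   # between the first two points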

Y. Cheng and Y. Chawathe and A. LaMarca and J. Krumm. Accuracy Characterization for Metropolitan-scale Wi-Fi Localization. 2005. in Proc. ACM MobiSys, pages 233--245.
Describes three positioning algorithms for use in WiFi positioning systems (Place Lab in this case), and proceeds to evaluate each in three different outdoor areas. The centroid approach simply takes the arithmetic mean of all samples for a particular AP to estimate that AP's position. When attempting to position a user, the RSS values heard from each AP can then be used to position the user 'in the middle' of all APs, assuming that the higher the RSS the closer to the AP the user is (see the sketch below). Evidently this works best when there are two or more APs, as otherwise the user could be anywhere on an annulus centred on one AP. Fingerprinting is another technique, as used in the RADAR system, where the RSS values seen by the user (the fingerprint) are compared to fingerprints stored in a database. The k nearest fingerprints (in terms of Euclidean distance in RSS space) are found and their positions averaged to obtain the position of the user. Finally, particle filters can be used, where the sensor model is defined by the points in the database that express how RSS varies for each AP with distance. The motion model is taken to be a random walk in this work. Data was collected in three neighbourhoods by war-driving (single drives down most roads). Positions of sample points were obtained using GPS. Another set of drives was then conducted, with the GPS receiver now acting as ground truth compared to the positions given by Place Lab. The authors examine the effects of AP turnover, noisy GPS data, and density of APs in the map. They conclude that in dense deployments the algorithms are relatively equal in performance, whereas more complex approaches perform better in sparse deployments. Noisy GPS data affects fingerprinting techniques (where the positions of the database fingerprints are averaged together) far more than particle filter based algorithms. The authors also mention that in their view the RSS values at a single location for a particular AP lie within a fixed range.
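A sketch of the weighted-centroid step, assuming previously estimated AP positions and an invented dBm-to-weight heuristic:

    # Weighted centroid: estimate the user's position as the mean of the
    # (previously estimated) AP positions, weighted so that stronger RSS
    # pulls the estimate closer. The dBm-to-weight mapping is a heuristic.
    def weighted_centroid(ap_positions, rss_dbm):
        weights = {ap: 10 ** (rss / 10.0) for ap, rss in rss_dbm.items()}
        total = sum(weights.values())
        x = sum(ap_positions[ap][0] * w for ap, w in weights.items()) / total
        y = sum(ap_positions[ap][1] * w for ap, w in weights.items()) / total
        return (x, y)

    aps = {"ap1": (0.0, 0.0), "ap2": (10.0, 0.0)}
    print(weighted_centroid(aps, {"ap1": -40, "ap2": -50}))  # nearer ap1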

Andreas Haeberlen and Eliot Flannery and Andrew M. Ladd and Algis Rudys and Dan S. Wallach and Lydia E. Kavraki. Practical robust localization over large-scale 802.11 wireless networks. 2004. in Proc. ACM MobiCom, pages 70--84.
Details experiments on ascertaining how robust a WiFi location system is in the presence of large numbers of building users, even with very little training data. The authors also note that the RSS values reported by different 802.11 transceiver cards are related to one another by simple linear functions, thus indicating that a training set produced with one set of hardware can easily be adjusted to be used with another set.

A. K. M. Mahtab Hossain and Hien Nguyen Van and Yunye Jin and Wee-Seng Soh. Indoor Localization Using Multiple Wireless Technologies. 2007. in Proc. IEEE MASS, pages 1--8.
Motivates using the difference in RSS values from APs for location, rather than raw RSS values, to avoid the dependence on the particular hardware configurations used for recording the location map. The authors promote the idea of interpolating between RSS readings to create the full RSS map. They measure the standard deviation of the variation in RSS at any point in their testbed to be 8 dB. The idea of designating certain APs as anchors is proposed, with anchors being those whose fingerprints vary most over the location area (rather than having uniform coverage). The authors demonstrate the value of using RSS difference rather than absolute values by testing 20 locations with two devices, and comparing the measured RSS values. The trends in RSS are similar, but with a constant offset from each other. Bluetooth location using RSS difference appears to perform slightly better than WiFi with RSS difference. The authors show that the usage of fictitious training points decreases error in location, particularly for small numbers of real samples. It remains to be seen whether this is true when the RF environment is more complex than the lecture theatre where the tests were carried out.
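The hardware-independence argument is easy to illustrate: pairwise RSS differences cancel a constant per-device offset. A sketch with invented readings:

    # If two cards report RSS values differing by a constant offset, pairwise
    # differences between APs cancel the offset, making fingerprints
    # hardware-independent. The readings below are invented.
    from itertools import combinations

    def rss_differences(rss):
        return {(a, b): rss[a] - rss[b] for a, b in combinations(sorted(rss), 2)}

    card1 = {"ap1": -50, "ap2": -60, "ap3": -70}
    card2 = {ap: v - 7 for ap, v in card1.items()}   # same spot, offset card
    print(rss_differences(card1) == rss_differences(card2))   # True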

Roberto Battiti and Mauro Brunato and Alessandro Villani. Statistical Learning Theory for Location Fingerprinting in Wireless LANs. 2002. DIT-02-0086, Department of Information and Communication Technology, University of Trento, Italy.
Compares the accuracy and processing power required for location estimation using WiFi RSS of four approaches, namely support vector machines, nearest neighbour inverse distance weighting, maximum likelihood Bayesian estimation, and multi-layer perceptrons (neural networks). In addition, the ability of each approach to classify the position into the correct room was evaluated. SVMs were found to give results comparable to nearest neighbour inverse distance weighting for room classification, whilst they outperformed all other algorithms tested for location estimation. In the SVM approach, the original idea is to find a plane that partitions a group of points in hyperspace into two groups. Hence, this can be used to classify whether or not a mobile node is in a particular room. This hyperplane can be further used to derive the location of the node (rather than just the room). For the Bayesian approach, either a radio model is assumed (as is knowledge of where walls are located) or else a large number of samples is collected. These are then used to build up a spatially-dependent probability distribution function for later prediction. The neural network approach takes as its inputs the RSS values from the 6 APs in the test environment. The sample points are then input to the network, and the weights adjusted in order that the network outputs the (as near as possible) correct position.

Jeffrey Hightower and Roy Want and Gaetano Borriello. SpotON: An Indoor 3D Location Sensing Technology Based on RF Signal Strength. 2000. UW CSE, 00-02-02, Department of Computer Science and Engineering, University of Washington, Seattle, WA.
Overviews a system using active RFID tags to derive location based on signal strength. The algorithm implemented uses hill-climbing to determine tag location from RSS values. An initial deployment used off-the-shelf technology, which was then replaced with units that yielded more accurate RSS readings. Accuracy is expected to be of the order of one cubic metre.

R. Battiti and A. Villani and T. Le Nhat. Neural network models for intelligent networks: deriving the location from signal patterns. 2002. in Proc. AINS.
Describes a neural network to infer position from RF signals. The network does not require the positions of the RF base stations, nor use an RF propagation model. 194 measurements are taken at different times of the day, and used as a training set. The authors point out that over-training the model results in it being less general. The neural network chosen is a 3-16-2 layer one. The average positioning error achieved is 1.91 metres. The $k$-nearest neighbours approach yields less than 1.8 metres of error (on average) with $k=6$. A comparison is made between the two methods when the variability of the RSS value at a single location is taken into account. Both perform approximately similarly.

Paul Castro and Patrick Chiu and Ted Kremenek and Richard Muntz. A Probabilistic Room Location Service for Wireless Networked Environments. 2001. in Proc. UbiComp, pages 18--34.
Describes the Nibble WiFi location system. This uses AP signal strength readings in order to distinguish between different rooms (but not more accurately than this). Success rates varied depending on which room (due to its location and size). The paper goes on to describe how such a room-level system can be used in conjunction with other sensors to achieve greater context awareness.

B. Li and J. Salter and A.G. Dempster and C. Rizos. Indoor Positioning Techniques Based on Wireless LAN. 2006. in Proc. 1st IEEE International Conference on Wireless Broadband & Ultra Wideband.
Describes a WiFi location system that utilises a fingerprint database. The databases tested include fingerprints that are generated using Kriging and Inverse Distance Weighted interpolation, in addition to the samples collected manually. The authors evaluate each fingerprint database, also testing the results based on the number of sample points in the database. They then proceed to evaluate a Bayesian approach to deriving location, noting that it has a lower 90th percentile error than the nearest neighbour approach.

Moustafa Youssef and Ashok Agrawala. The Horus WLAN Location Determination System. 2005. in Proc. ACM MobiSys.
Describes the Horus WLAN location system, a probabilistic approach to the problem. The authors note that WiFi RSS varies over time at a single location in an approximately Gaussian manner. They further note that there are small scale variations (on the order of less than a metre) of up to 10 dB. They set out to counter these variations by assuming that if a user's computed location is more than a particular threshold distance from their last location (the threshold being subject to a certain parameter and the last known speed of movement of the user), the effects of small scale variation have come into play. These are countered by perturbing the input RSS signals, and picking the location fix that is closest to the last known position. In addition, the authors use a centre of mass technique, whereby they examine all the points in the map that have a high probability density for the particular vector of signal strengths under consideration, and find the centre of mass of these, as the location fix. An additional important part of the work is the idea that because an AP's RSS varies Gaussianly, such a distribution can be modelled; if this is done in the training phase, then in the online phase several samples can be averaged and fed into the previously calculated distribution in order to calculate location, instead of assuming that all RSS values from each AP are independently distributed. The accuracy achieved by the system is approximately 1 metre, 90 p.c. of the time, depending on the testbed used.
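The per-AP Gaussian modelling can be sketched as follows, with an invented radio map; Horus's perturbation and centre-of-mass refinements are omitted.

    # Gaussian-likelihood sketch in the spirit of Horus: training stores a
    # per-location, per-AP (mean, std dev) of RSS; the online phase picks the
    # location maximising the product of Gaussian densities (here, the sum of
    # log-densities) of the observed values. Training values are invented.
    import math

    radio_map = {
        "room101": {"ap1": (-45.0, 3.0), "ap2": (-70.0, 4.0)},
        "room102": {"ap1": (-60.0, 3.5), "ap2": (-55.0, 3.0)},
    }

    def log_likelihood(obs, model):
        ll = 0.0
        for ap, value in obs.items():
            if ap not in model:
                continue
            mu, sigma = model[ap]
            ll += -((value - mu) ** 2) / (2 * sigma ** 2) - math.log(sigma)
        return ll

    def locate(obs):
        return max(radio_map, key=lambda loc: log_likelihood(obs, radio_map[loc]))

    print(locate({"ap1": -47, "ap2": -68}))   # room101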

Kamol Kaemarungsi and Prashant Krishnamurthy. Properties of Indoor Received Signal Strength for WLAN Location Fingerprinting. 2004. in Proc. MobiQuitous, pages 14--23.
Examines the statistics of WLAN RSS in an indoor environment. Four measurement locations are used, and samples taken to ascertain the distribution. The authors conclude that the distribution is not Gaussian, as it is generally left-skewed. In addition, the standard deviation of RSS varies depending on signal level, with higher mean RSS values having distributions with greater standard deviations (somewhat counter-intuitive). The effect of a human in the path between an AP and the receiver is examined, the authors concluding that the standard deviation of RSS is greater with a human present. Interference from a co-channel AP did not appear to affect the RSS distribution from either AP. Other properties are also considered. A simulation is conducted concerning how a wireless positioning algorithm would perform given real measurements, as compared to those inferred from Gaussian distributions around a smaller number of measurement points. They find that the usage of a Gaussian approximation to the RSS distribution does in fact achieve the same level of error in positioning as the ``real'' fingerprint values. Thus, whilst a Gaussian model is not entirely correct, it appears to be ``good enough''.


Spatial Databases

R. H. Güting. GraphDB: Modeling and querying graphs in databases. 1994. in Proc. International Conference on Very Large Data Bases, pages 297--308.
Outlines a data model and query language for integrating graphs into databases. Vertices and edges are described, then paths made up of edges. The public transport network is used as an example, where the physical layout is described with vertices for intersections, and edges for links (roads, railways). Bus/train lines can then be mapped onto these, by describing paths made up of edges, connecting stations that are at vertices. Timetables are layered on top, by using graphs of arrival and departure times, with edges representing transfers between different arrivals and departures, labelled wait, change or travel. Queries can be constructed that involve regions of space (and traversing them). Rewrite and union operators are defined. At the time of writing, the system had not yet been implemented.

Ralf Hartmut Güting. An Introduction to Spatial Database Systems. 1994. VLDB Journal, pages 357--399.
Survey of the purposes and structures of spatial database systems. Mentions the need to consider how cells are stored in databases, as ideally geographically close cells should be near each other in the database; z-order enumeration is therefore good. Spatial indexing structures are outlined, such as grid files, kd-trees (resulting in a diagram similar to the CAN P2P network), and R-trees. The author also examines implementing spatial indexing on top of a closed DBMS, or as an integrated architecture, using an extensible DBMS. The latter is likely to be more efficient, as the former must somehow overlay itself on the standard DBMS, or integrate the results from the DBMS with a parallel spatial subsystem.

P. Rigaux and M. Scholl and L. Segoufin and S. Grumbach. Building a Constraint-Based Spatial Database System: Model, Languages, and Implementation. 2003. Information Systems, 28(6):563--595.
DEDALE uses conjunctions of linear constraints to describe regions that are stored in its database. These constraints are functions of up to two variables, that act as keys. For example, the x, y, and z positions of an object might each be functions of time. Note that this does not restrict the global dimension of the data stored, rather only the number of dimensions on which those global dimensions exist (i.e. the number of interpolated dimensions). Trajectories (paths, polylines) are also stored as conjunctions of linear constraints, rather than points. The usage of such constraints restricts the user to a linear interpolation scheme. It is not clear how well the system performs with sparse data points. The authors introduce a database algebra for querying the database, and rewrite rules in order to simplify queries formed in it. An SQL-like language is presented, which can be translated into the query algebra. Temporal logics are mentioned, but they are not fully implemented in DEDALE. The system has been implemented using the BASIS DBMS. The authors also explain the various geometric algorithms used to normalize queries made in the algebra, i.e. ascertain what all the constraints result in, such as the exact plane described by them. The system has a simple graphical interface, which also allows the reading in of vector-based data for import into the database (the algorithm for partitioning such data into a constraint representation by splitting it into convexes is also outlined).

Adam Etches and David Parker and Sean Ince and Philip James. UTIS (Urban Transportation Information System) a geo-spatial transport database. 2000. in Proc. International Symposium on Advances in geographic information systems, pages 83--88.
Description of an ambitious project to combine detailed road topology, public transport lines, and traffic data from SCOOT into a single database, known as UTIS. This database would then act as input to other programmes such as traffic microsimulators. The project had input data for only a small 3 square kilometre area, apparently via manual annotation. The project appears to no longer exist.

Shashi Shekhar and Mark Coyle and Brajesh Goyal and Duen-Ren Liu and Shyamsundar Sarkar. Data models in geographic information systems. 1997. Commun. ACM, 40(4):103--111.
Proposes a model known as the Geographical Information System Entity Relational Model (GISER) for storing field and object based data in a single DBMS. The authors motivate this approach by considering intelligent transport system queries. They also tag field based data with a 'discretised by' attribute, that details which interpolation function was used to calculate the value. The authors do not actually implement the system, and indeed raise the problem of accurately restoring field models from discretised data.

D. Papadias and J. Zhang and N. Mamoulis and Y. Tao. Query Processing in Spatial Network Databases. 2003. in Proc. VLDB, pages 790--801.
Describes the implementation details of a system to perform queries such as finding the nearest neighbour entity on a network (such as the road network). An R-tree is used to store the physical point entities (hotels, petrol stations...), and adjacency lists are used to store which road topology nodes are connected to each other. Each road also has a minimum bounding rectangle for the polyline that it is represented by, to speed up queries. Once coarse matching has been performed, finer filtering can take place using the exact polyline representation. An R-tree is then used to index the polyline MBRs. Algorithms are described and their performances compared for nearest neighbour, range queries (retrieve all objects within range x), closest-pairs (e.g. find the closest combined hotel and restaurant), and e-distance joins (find all pairs within x km from here). For each class of algorithm, one based on Euclidean distance and another on network distance are presented and compared.

Max J. Egenhofer. What's special about spatial?: database requirements for vehicle navigation in geographic space. 1993. in Proc. ACM SIGMOD international conference on Management of data, pages 398--402.
Overviews the challenges of constructing spatial database systems suitable for in-car navigation. The requirements (fast access, variable levels of detail, good interface) are detailed, typical queries enumerated, and challenges (real-time access to data, and query languages and processing -- more research now done?) given.

Leonore Neugebauer. Optimization and evaluation of database queries including embedded interpolation procedures. 1991. in Proc. ACM SIGMOD international conference on Management of data, pages 118--127.
Describes extending an RDBMS with interpolation functions. SQL is extended to support nested FROM statements, which can contain a specifier for which interpolation function to use (written in C, by the user), the bounds for the interpolation region, and the step size, dictating how many values are desired to be calculated in the region. Optimisation strategies are discussed for making this process more efficient. Essentially, interpolation is performed on the dataset, the results of which are then stored back, after which the ``normal'' part of the query can be run.

A. Prasad Sistla and Ouri Wolfson and Sam Chamberlain and Son Dao. Modeling and Querying Moving Objects. 1997. in Proc. International Conference on Data Engineering, pages 422--432.
A data model called Moving Objects Spatio-Temporal is proposed. Queries can be instantaneous (i.e. ``normal'' DBMS queries), continuous, or persistent. Continuous queries are those which a user may want to re-execute every clock tick to gain results which were not available previously. To avoid continuous evaluation of the query by the server, the first execution of the query returns tuples that contain the results that are valid for a certain amount of time. This then means that the entity querying can simply decide whether there will be any new information to receive, and if not, it need not contact the server. The query answer may need to be updated if a parameter it depends on (such as velocity) changes. A persistent query is a series of instantaneous queries, all referring to the same time t, but executed at successively later times t' > t, t'' > t', etc. This then shows how the parameter has evolved over time, whereas a continuous query would only reflect future values based on the current equation for the attribute under consideration. A method of indexing dynamic attributes is proposed: if one imagines a graph of the value of an attribute plotted over time, one could use an R-tree to spatially index the data (record IDs) on the basis of time and value being the two dimensions. The authors then say that for variation in 2-D one could add a dimension to the R-tree. The index needs to be periodically reconstructed. At the time of writing the system had not been implemented.

A. Guttman. R-trees: a dynamic index structure for spatial searching. 1984. in Proc. ACM SIGMOD, pages 47--57. ACM Press.

Donald Shepard. A two-dimensional interpolation function for irregularly-spaced data. 1968. in Proc. 23rd ACM national conference, pages 517--524.
Describes a simple inverse-distance-weighted interpolation algorithm, which is then developed to deal with the problems of selecting only a subset of relatively nearby points, accounting for the direction of the points (to allow for 'shadowing' of one another), noting the gradient of the signal, and allowing for machine epsilon approximations. The author also mentions that barriers can easily be simulated, causing the estimated distance to data points to be increased from the simple Euclidean case. A good paper.
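As a rough illustration of the paper's starting point (before the refinements for nearby-point selection, direction, gradient, and barriers), a minimal inverse-distance-weighting sketch in Python; the exponent p and the (xi, yi, value) sample format are illustrative choices rather than Shepard's exact formulation:

    import math

    def shepard_interpolate(samples, x, y, p=2.0):
        """Inverse-distance-weighted estimate at (x, y).

        samples: iterable of (xi, yi, value) tuples.
        p: distance exponent (p = 2 in the basic method).
        """
        num, den = 0.0, 0.0
        for xi, yi, vi in samples:
            d = math.hypot(x - xi, y - yi)
            if d == 0.0:  # query point coincides with a sample
                return vi
            w = 1.0 / d ** p
            num += w * vi
            den += w
        return num / den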

Shashi Shekhar and Sanjay Chawla and Siva Ravada and Andrew Fetterer and Xuan Liu and Chang-tien Lu. Spatial Databases: Accomplishments and Research Needs. 1999. Knowledge and Data Engineering, 11(1):45--55.
Surveys the field of spatial databases, and outlines the need for more work on field and network data. Query processing (cost models, bulk load) also requires research. The authors note that in spatial query processing the order of query operations such as selection and projection may be different from non-spatial databases, as some operations such as 'contained in' may be computationally expensive, and therefore should be performed only after initial selection, rather than in a prior projection.

Hongliang Gu, Wei Wei, Wei Wang, Xiaoyong Guo, Jun Liang, Liping Shao, Jifeng Chen, Na Ye. ASMod: A Core Model Supporting Location-aware Computing in Smart Classroom. 2006. International Journal of Computer Science and Network Security, 6(3A):161--168.
Describes the use of R-trees to represent a space, and objects within it that are regular and aligned to the space's axes. For objects that do not satisfy these constraints, the minimum bounding box is stored in the R-tree, along with a pointer to a quadtree that specifies the object more precisely. A quadtree partitions space into quarters, and progressively partitions these quarters as necessary in order to achieve an accurate representation of the object, up to a certain specified tolerance. Note that not all the quarters need be partitioned to the same degree. This structure is used for location awareness in a smart classroom context.

Timos Sellis. CHOROCHRONOS: Research on Spatio-Temporal Database Systems. 1999. in Proc. Integrated Spatial Databases. Digital Images and GIS: International Workshop, pages 308--316.
Overviews the European CHOROCHRONOS project, which is predominantly concerned with databases that do not treat time as simply another dimension, but rather exploit its unique properties as compared to space, to allow the efficient tracking and representation (e.g. of velocity and acceleration) of moving objects.

Dieter Pfoser and Nectaria Tryfona. Capturing Fuzziness and Uncertainty of Spatiotemporal Objects. 2001. in Proc. Advances in Databases and Information Systems : 5th East European Conference, pages 112--126.
In both time and space there is a distinction between accuracy and precision. Accuracy reflects the magnitude of the error in a measurement, whilst precision indicates how exactly a perfectly accurate (zero error) measurement could be specified. For some phenomena there is absolute precision; for example, the position of a point in a grid where the position can only be at one of the gridline intersection points. In contrast, in applications such as terrain characterisation, where one area could be classed as desert and another as savannah, there may be no clear boundary line. This is where fuzzy set theory can be used, in order to provide a probability mass function that specifies how each point in the fuzzy region should be treated (i.e. with what probability it is within, or outside, a particular region). This type of uncertainty applies similarly to time, with both instants in time and periods (defined as lying between two instants) being fuzzy. The paper presents a simple mathematical model of the concepts, but leaves implementation (which the authors note will require more complex mathematics) for further work.

S. Saltenis and C. Jensen. R-tree based Indexing of General Spatiotemporal Data. 1999. CH-99-18, CHOROCHRONOS: TMR Research Network Project.
Describes the distinction between valid times and transaction times. Valid times are those associated with the actual event or period in question, such as a piece of land being owned by a particular person between the years 1999 and 2000. Transaction times are metadata about the database record itself, i.e. when this information was input into the database, and for how long (until when) it was current, i.e. not superseded by anything else (e.g. in 2002 we found out that the land was in fact owned by someone else between 1999 and 2000, and hence created a new database record that supersedes the first one). The authors also mention the concept of now-relative times, where an end time might be ongoing, and therefore should be set to 'now', or some offset from it. For example, given an address database that must be updated within two months of any change, we could say that the validity of a record is 'now - 2 months', as we are uncertain of the current state (though it could be unchanged). The authors go on to introduce the R-ST-tree for storing spatiotemporal data, based on the GR-tree and the R*-tree, but with the new idea that the bounding box might need to be stair-shaped, rather than rectangular, in order to make it a better fit. These bounding boxes need to change in size as time changes, as they may contain data with end times of 'now' (or some offset). The paper goes on to describe the various algorithms necessary for maintaining the structure.

M. Mainguenaud. Modelling the Network Component of Geographical Information Systems. 1995. International Journal of Geographical Information Systems, 9(6):575--593.
Describes the use of an object-oriented database model in representing network-related data. The author describes graphs as collections of nodes and edges, each potentially possessing sub-graphs. The key point made is that routing can be done at different levels of abstraction, with much of the route computed at a high level, and the detailed sub-graphs consulted only at the end points.

M. Scholl and A. Voisard. Object-Oriented Database Systems for Geographic Applications: An Experiment with O2. 1992. in The O2 Book, pages 585--618. Morgan Kaufmann.
High-level overview of how a spatial database was implemented on the object-oriented O2 database system. The authors describe design decisions, and treat the problem from a theoretical perspective. In-depth implementation details are not given.

Andrew U. Frank. 1998. Chapter in Spatial and Temporal Reasoning in Geographic Information Systems. Oxford University Press.
Describes various different concepts of time. These can be divided into linear and cyclic time, and also classified by whether they can be totally or partially ordered (sometimes it is not possible to compare two people's notions of time, only the ordering within each person's notion). Branching time should also be considered, where there may be multiple futures or pasts. Finally, there may be multiple perspectives on an event (transaction time), where the state of an object at time x as recorded at time y may differ from that for the same time x as recorded/amended in the database at a later time z.

Miguel Angermann and Jens Kammann and Patrick Robertson and Alexander Steingass and Thomas Strang. Software Representation for Heterogeneous Location Data Sources Using Probability Density Functions. 2001. in International Symposium on Location Based Services for Cellular Users -- Locellus 2001, pages 107--118.
Paper on representing location via probability density functions, which includes an explanation of WGS84 co-ordinates and their conversion to Cartesian ones.
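For reference, the standard WGS84 geodetic-to-Cartesian (ECEF) conversion that such representations rely on; a minimal sketch using the usual WGS84 ellipsoid constants:

    import math

    # WGS84 ellipsoid constants
    A  = 6378137.0            # semi-major axis (metres)
    F  = 1.0 / 298.257223563  # flattening
    E2 = F * (2.0 - F)        # first eccentricity squared

    def geodetic_to_ecef(lat_deg, lon_deg, h):
        """Convert WGS84 latitude/longitude/height to Earth-centred
        Earth-fixed Cartesian co-ordinates (metres)."""
        lat = math.radians(lat_deg)
        lon = math.radians(lon_deg)
        # Prime vertical radius of curvature
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
        x = (n + h) * math.cos(lat) * math.cos(lon)
        y = (n + h) * math.cos(lat) * math.sin(lon)
        z = (n * (1.0 - E2) + h) * math.sin(lat)
        return x, y, z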

Ordnance Survey. Guide to Co-ordinate Systems. 2006. Ordnance Survey Great Britain.
The Earth is approximately, but not exactly, an ellipsoid. Various ellipsoid models exist for modelling it, such as the GRS80 ellipsoid used by GPS. The Airy 1830 ellipsoid fits Britain's topography better, and is therefore the OS reference. Positions on an ellipsoid are defined in terms of known points, i.e. a datum, and calculated from a Terrestrial Reference Frame (TRF), i.e. a set of reference points in the datum. A datum by definition is exact, as it concerns several set points. The calculation of where a point is in that datum requires measurement, which is subject to error. Hence, in the creation of known points in a TRF, errors are introduced. To convert between one co-ordinate system and another requires the positions of at least 3 points in the two TRFs to be compared, and the average transformation factor calculated. This means that the transformation factor is inexact: translation between co-ordinate systems is only as exact as the measurements in the two TRFs. Various datums are in use, e.g. WGS84 for GPS, and OSGB36 for the UK. Conversion between datums is carried out using a Helmert transform, which consists of a function for translating, rotating, and scaling the TRF (hence 7 parameters). Note that before utilising a Helmert transform, the co-ordinates must be in Cartesian space, and therefore polar co-ordinates such as latitude and longitude will need to be converted using trigonometry. For measuring height, we must define a level surface over the earth. This is known as the Geoid, and is an irregular surface that depends on local gravitational variations. Evidently we can only model the earth's true Geoid, and hence we have a true global Geoid and a local model Geoid. The surfaces of the ellipsoid in use and the Geoid will be separated by a varying distance. Orthometric height is the height above the true Geoid, whilst local orthometric height is that above the local Geoid model. In contrast, heights measured above sea level are based on the mean sea level (MSL) experienced at a reference point, over a defined period. MSL is not constant over the globe, and hence heights above MSL vary by country depending on the datum (reference point) used. MSL also varies over time, and hence the MSL used for the UK is no longer the actual MSL.
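A minimal sketch of the 7-parameter Helmert transform described above, using the small-angle rotation approximation; parameter values and sign conventions vary between published datum transformations, so treat this as illustrative:

    import numpy as np

    def helmert_transform(xyz, tx, ty, tz, rx, ry, rz, s_ppm):
        """Apply a 7-parameter Helmert transform to Cartesian co-ordinates.

        xyz        : (3,) or (N, 3) array of Cartesian co-ordinates (metres)
        tx, ty, tz : translations (metres)
        rx, ry, rz : rotations (radians; small-angle approximation)
        s_ppm      : scale change in parts per million
        """
        s = 1.0 + s_ppm * 1e-6
        # Small-angle rotation matrix
        r = np.array([[1.0, -rz,  ry],
                      [ rz, 1.0, -rx],
                      [-ry,  rx, 1.0]])
        t = np.array([tx, ty, tz])
        return t + s * (np.asarray(xyz) @ r.T)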


Filtering & Smoothing

Ken Turkowski. Filters for Common Resampling Tasks. 1990. Apple Computer.
Brief overview of different types of filters for interpolation and decimation (i.e. sampling). The paper details the basics of Gaussian filters, as well as box and tent filters. The phase of a filter refers to the positions at which samples are taken: with phase 0 the samples are at the same positions as the point under consideration, whilst with phase 1/2 they are halfway between the existing points. The key is to choose a filter function where the gain in the passband (the frequencies we want to sample at) is high, whilst the gain in the stopband (frequencies we wish to drop) is low. Many filters, such as the Gaussian, have low stopband response, but at the price of the passband response dropping off at the edges; this causes loss of sharpness in the resulting image. Gain in the stopband results in jaggedness and aliasing. Sinc-based filters, such as the Lanczos filter family, have very good passband and stopband performance, and are the filter of choice in graphics applications. The 2-lobe Lanczos filter preserves a single side lobe on either side of the main lobe, and the 3-lobe similarly preserves two lobes on either side.
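A sketch of the Lanczos kernel itself (the normalised sinc windowed by a wider sinc, zero outside the window); lobes=2 and lobes=3 give the 2- and 3-lobe variants mentioned above:

    import math

    def lanczos(x, lobes=3):
        """Lanczos kernel: sinc(x) * sinc(x / lobes) for |x| < lobes,
        zero elsewhere (normalised sinc, i.e. sin(pi x) / (pi x))."""
        if x == 0.0:
            return 1.0
        if abs(x) >= lobes:
            return 0.0
        px = math.pi * x
        return lobes * math.sin(px) * math.sin(px / lobes) / (px * px)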

Amara Graps. An Introduction to Wavelets. 1995. IEEE Computational Science and Engineering, vol 2.
Good introduction to wavelets. Describes how wavelets have compact support, i.e. their value is zero outside of a particular finite interval. This is in contrast to Fourier analysis, where the function is periodic to infinity. Mother wavelets can be scaled (width) to generate daughters, that form wavelet families. The position of the daughter can also be varied. This then allows for very efficient compression using wavelets. The more coefficients are given, the greater the resolution of the resulting compressed image. Different wavelet families are useful for different applications, as each mother wavelet has a distinct shape. Wavelet packets are particular linear combinations of wavelets. The fast wavelet transform exists in a similar way to the fast Fourier transform. Various applications of wavelets (music, fingerprints, imaging) are explained.
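As a concrete (and simplest) illustration of the scaling idea, one level of the Haar wavelet transform, splitting a signal into coarse averages and detail coefficients; repeated application to the averages gives a multi-resolution decomposition of the kind described:

    def haar_step(signal):
        """One level of the (unnormalised) Haar transform on an
        even-length sequence: pairwise averages and differences."""
        avgs = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
        dets = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
        return avgs, dets

    def haar_inverse(avgs, dets):
        """Invert haar_step exactly."""
        out = []
        for m, d in zip(avgs, dets):
            out += [m + d, m - d]
        return out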

Wen-Yen Wu. A Dynamic Method for Dominant Point Detection. 2003. Graphical Models, 64:304--315.
Describes how to detect the points on a curve (whose equation is not specified) that are the most significant (dominant) ones. First, the Freeman chain code is calculated (i.e. for each consecutive pair of points forming the curve, the direction of the vector between the points is compared to a set of 8 'compass point' vectors, and the closest one found). Points on a straight line can then easily be excluded from consideration as dominant ones; the survivors are termed break points. The algorithm then finds the region of support (i.e. the region for which a given break point is the most relevant point) dynamically, by observing how the ratio of the break point's distance from the line joining the extremes of the region varies; above a certain threshold, the region of support is deemed to end. Next, the k-cosine value is determined for each break point, and those below a certain threshold are discarded. The algorithm appears to work well compared to the Teh-Chin method, due to its optimisation of making the length of each region of support depend in part on the previous region of support.
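The chain code step is simple enough to sketch; this assumes an 8-connected digital curve, i.e. consecutive points differ by exactly one of the eight unit direction vectors:

    # The eight 'compass point' direction vectors of the Freeman chain code,
    # indexed 0..7 anticlockwise from east.
    DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1),
            (-1, 0), (-1, -1), (0, -1), (1, -1)]

    def chain_code(points):
        """Freeman chain code of a digital curve given as integer (x, y)
        points; each step must match one of the eight directions exactly."""
        return [DIRS.index((x1 - x0, y1 - y0))
                for (x0, y0), (x1, y1) in zip(points, points[1:])]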

Azriel Rosenfeld and Emily Johnston. Angle Detection on Digital Curves. 1973. IEEE Transactions on Computers, C-22:875--878.
Describes the method of detecting dominant points using corner detection. This involves calculating the k-cosine at each point (see above) and finding the region of support such that the points directly adjacent to the boundary points of the region have a greater k-cosine than those bounding the region. The point within the region with a k-cosine greater than all the rest in the region is the dominant point. The maximum length over which the comparison to find a region's extent is carried out is pre-determined (e.g. number of points / 10). This method has the problem that if there is a triangular loop in the curve (i.e. it crosses itself), the dominant point will be deemed to be on a straight line, rather than at the point of maximum curvature. This does not appear to be a problem if we assume a curve that is single-valued for any one input and has C1 continuity.
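For concreteness, the k-cosine measure used here and in the related papers, sketched in Python; points is a list of (x, y) tuples, and boundary handling (or wrap-around for closed curves) is omitted:

    import math

    def k_cosine(points, i, k):
        """Rosenfeld-Johnston k-cosine at point i: the cosine of the angle
        between the vectors from p_i to p_(i-k) and from p_i to p_(i+k).
        Values near 1 indicate a sharp corner; near -1, a straight line."""
        ax = points[i - k][0] - points[i][0]
        ay = points[i - k][1] - points[i][1]
        bx = points[i + k][0] - points[i][0]
        by = points[i + k][1] - points[i][1]
        return ((ax * bx + ay * by) /
                (math.hypot(ax, ay) * math.hypot(bx, by)))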

Azriel Rosenfeld and Joan S. Weszka. An Improved Method of Angle Detection on Digital Curves. 1975. IEEE Transactions on Computers, C-24(9):940--941.
(Correspondence) A development of the above paper, where the algorithm is improved by averaging the k-cosine values at each point with surrounding ones. Appears to give slightly better results.

Cho-Huak Teh and Roland T. Chin. On the Detection of Dominant Points on Digital Curves. 1989. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(8):859--872.
Surveys and experimentally compares various algorithms for finding the dominant points on a digital curve. The authors go on to present their own algorithm, which outperforms the others in many ways, and requires no input parameters. They note that it can be computationally intensive due to the requirement to find the perpendicular distance of each point to a chord. This algorithm is later built on by Wu to select regions of support where the length of each depends in part on the length of the previous.

D. H. Douglas and T. K. Peucker. Algorithms for the reduction of the number of points required to represent a line or its caricature. 1973. The Canadian Cartographer, 10(2):112--122.
Famous algorithm for line simplification, treating all points equally. Draw a line between the first and last points, and find the furthest intermediate point from that line. If it is a distance greater than epsilon away, split the line at that point and recurse on the two portions.
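A minimal recursive sketch (assumes the first and last points of each portion are distinct; epsilon is the perpendicular-distance tolerance):

    import math

    def douglas_peucker(points, epsilon):
        """Simplify a polyline given as (x, y) tuples, returning a subset
        of its points within epsilon of the original."""
        if len(points) < 3:
            return list(points)
        (x1, y1), (x2, y2) = points[0], points[-1]

        def dist(p):
            # Perpendicular distance from p to the chord (x1,y1)-(x2,y2)
            px, py = p
            num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
            return num / math.hypot(x2 - x1, y2 - y1)

        idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                        key=lambda t: t[1])
        if dmax <= epsilon:
            return [points[0], points[-1]]
        left = douglas_peucker(points[:idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return left[:-1] + right  # avoid duplicating the split point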

Abraham Savitzky and M. J. E. Golay. Smoothing and Differentiation of Data by Simplified Least Squares Procedures. 1964. Analytical Chemistry, 36:1627--1639.
A good bibliography on dominant point detection (and similar techniques).

William H. Press and Saul A. Teukolsky and William T. Vetterling and Brian P. Flannery. 2002. Chapter in Numerical Recipes in C++. Cambridge University Press.
Describes the Savitzky-Golay smoothing algorithm and its implementation in C++. Essentially, for each input point it carries out a high-order polynomial least squares fit and assigns the corresponding output point the value of the polynomial at that $x$ co-ordinate. This then preserves the $y$ values of maxima and minima far better than a simple windowed averaging filter. The S-G algorithm allows the least-squares fits to be efficiently performed.
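A naive sketch of the idea: fit a polynomial to each window by least squares and evaluate it at the centre. The point of S-G is that these fits collapse to a fixed convolution (which is what makes the method efficient, as Numerical Recipes describes); window and order below are arbitrary illustrative defaults:

    import numpy as np

    def savgol_smooth(y, window=11, order=4):
        """Smooth y by fitting an order-`order` polynomial over each
        odd-length window and taking the fitted value at the centre.
        Points within half a window of either end are left as-is."""
        assert window % 2 == 1 and order < window
        half = window // 2
        y = np.asarray(y, dtype=float)
        out = y.copy()
        x = np.arange(-half, half + 1)
        for i in range(half, len(y) - half):
            coeffs = np.polyfit(x, y[i - half:i + half + 1], order)
            out[i] = np.polyval(coeffs, 0.0)  # fitted value at the centre
        return out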

A. Carmona-Poyato and N. L. Fernández-García and R. Medina-Carnicer and F. J. Madrid-Cuevas. Dominant Point Detection: A New Proposal. 2003. Image and Vision Computing, 23:1226--1236.
Proposes using an adaptive bending value in order to assess local curvature at each candidate point. This consists of taking the unit vectors to the preceding and succeeding candidates and averaging the magnitude of their sum, resulting in a number between 0 and 1. This ensures that the left and right regions of support have equal effect on the bending value, rather than it depending on the lengths of the vectors. Collinear points are eliminated by evaluating three different metrics which assess how far a point is from a line. These involve the sum of the squared errors divided by the compression ratio (number of candidate points divided by number of dominant points selected) raised to a certain power. The proposals are evaluated on the standard dominant point test cases. The method is claimed to be scale independent.

David H. Foster. Automatic Repeated-Loess Decomposition of Data Consisting of Sums of Oscillatory Curves. 2002. Statistics and Computing, 12:339--351.
Describes how an arbitrary curve can be automatically decomposed into representative components. First, the number of maxima in the candidate component is estimated by oversmoothing the curve, then progressively reducing the smoothing until a certain threshold on the number of maxima is reached. This gives one component, which is then subtracted from the input data. The process is repeated until a point at which the remaining signal is deemed to be noise. The algorithm is evaluated on psychophysical data on human abilities in detecting differences between line angles.

Herbert Freeman. Computer Processing of Line-Drawing Images. 1974. ACM Computing Surveys, 6(1):57--97.

Paul L. Rosin. Techniques for Assessing Polygonal Approximations of Curves. 1997. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(6):659--666.
Describes the different ways that algorithms that attempt to approximate polygonal shapes can be compared against each other. Measures such as integral square error (ISE) and compression ratio (CR) express how accurate and compact the representation is. However, a metric combining the two relies on all algorithms outputting an approximation polygon with the same number of sides. To solve this, the author proposes a different metric, where the output of an algorithm is compared to the optimal polygon approximating the test shape, found by dynamic programming: the error and compactness of the algorithm's output are compared to those of the optimal representation. The author uses Teh and Chin's curve to evaluate 23 algorithms using these criteria, finding that Lowe's algorithm performs excellently, followed by Banerjee's.

Daniel Castaño and Angela Kunoth. Adaptive Fitting of Scattered Data by Spline-wavelets. 2003. in Curves and Surfaces, pages 65--78. Vanderbilt University Press, Nashville, TN, USA.
Describes a method to represent a set of irregularly distributed points with B-Spline-(pre)wavelets. The data is not required to be in a regularly spaced grid. The fitting minimises the least-squares error between the fit and all points.

G. Gallus and P. W. Neurath. Improved Computer Chromosome Analysis Incorporating Preprocessing and Boundary Analysis. 1970. Physics in Medicine and Biology, 15(3):435--445.
Describes a method to detect the arms/tips and constriction points, as well as the centromeres, of chromosomes, in order to aid their measurement. The images are first preprocessed to remove noise. Then the boundary points of the raster image are analysed, first generating their chain code. The chain code is then converted into an N-chain, each value of which indicates the size of the angle between the lines to the points up to the full length of the curve on both sides of the point in question (noting that the curve wraps onto itself). The N-chain is further processed using prefix sums (but breaking when the n-code changes sign) to generate an S-chain. The larger the value of an element in the S-chain, the more likely it is to be considered a point of high curvature (dominant point). The threshold for this depends on how far through the chain the value lies, since indexes further along will naturally have higher prefix sums. The thresholds given were empirically defined, and it is not clear how to extend their definitions to curves made up of a greater number of points. Chromosome-specific heuristics are detailed, and constriction points are localised.

Majed Marji and Reinhard Klette and Pepe Siy. Corner Detection and Curve Partitioning Using Arc-Chord Distance. 2004. in Proc. IWCIA, pages 512--521.
A development of Phillips & Rosenfeld's algorithm for corner detection that is more tolerant of an incorrectly set $k$ value (which determines the scale of the features to be detected). The authors note that recording the sign of the distance from the chord to a point is important in ensuring that (e.g.) convex curves are not masked by nearby concave ones. This distance is a point's significance. Local maxima are detected by finding the windows around each point where the significances of all points in the window are strictly decreasing. Then, points with a significance above the average are picked as maximum points, and a further process identifies other maximum points. Plateaux are also considered in the new algorithm. The authors conclude by showing that their algorithm identifies a more optimal set of points than Phillips & Rosenfeld's when $k=6$.

T. Kadonaga and K. Abe. Comparison of Methods for Detecting Corner Points from Digital Curves. 1995. in Proc. 1st International Workshop on Graphics Recognition: Methods and Applications, pages 23--34.
Compares 11 different corner detection algorithms on several different curves. In particular, the invariance of the set of corner points detected over the test curve being transformed by scaling, rotation, and reflection are assessed. In terms of invariance, the n-code method performed best, with the Teh-Chin method performing poorly. The authors also evaluated how the set of corner points picked by each algorithm compared to those picked by human test subjects, finding that the Rosenfeld-Weszka method performs best, followed by n-codes.

David G. Lowe. 1985. Chapter in Perceptual Organization and Visual Recognition. Kluwer Academic Publishers.
Describes the problem of line segmentation, where a curve must be represented by straight lines in the most efficient way whilst (evidently) maintaining the same visual perception. The issue of what scale of features should be searched for is also significant. The author proposes an algorithm that examines the source curve at a variety of scales (differing by factors of 2), and proposes multiple curves of different scales as representations. These are filtered by comparing each of them to all points on the curve, in order to ascertain their significance.

Peter Craven and Grace Wahba. Smoothing noisy data with spline functions. 1978. Numerische Mathematik, 31(4):377--403.
Presents a method by which a smooth curve that has been perturbed by white noise can be approximated with $n$ piecewise shifted Bernoulli polynomials with one knot, plus $m+1$ polynomials of degree less than $m$. The authors mathematically derive their method, and proceed to demonstrate it in action on several test sets of data.

Paul S. Heckbert and Michael Garland. Survey of Polygonal Surface Simplification Techniques. 1997. in Proc. 24th International Conference on Computer Graphics and Interactive Techniques.
Contains a short overview of curve simplification algorithms.

Eamonn Keogh and Selina Chu and David Hart and Michael Pazzani. An Online Algorithm for Segmenting Time Series. 2001. in Proc. IEEE ICDM, pages 289--296.
Surveys existing line segmentation algorithms, and categorises them as sliding windows (where a segment is grown until it exceeds some error bound), top-down (where one segment from the start to the end point in the series is recursively partitioned until a particular error bound is achieved), and bottom-up (where a line segment is drawn between each point and its neighbour, and these are combined together until a particular error threshold is exceeded). All the approaches are evaluated on a variety of test data sets, and the sliding window approach is found to be the worst, and bottom up the best. The authors then propose a hybrid algorithm of the two in order to achieve good accuracy and the ability to work with online (rather than batch) data. They evaluate this in the same way, and find it to work almost as well as the bottom up approach.
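A minimal sketch of the bottom-up strategy; the merge cost here is the residual sum of squares of a least-squares line over the merged segment (Keogh et al. consider several error measures), and the initial pairing-up of points is simplified:

    import numpy as np

    def line_sse(seg):
        """Sum of squared residuals of the best-fit line through a segment."""
        if len(seg) < 3:
            return 0.0
        x = np.arange(len(seg), dtype=float)
        y = np.asarray(seg, dtype=float)
        resid = y - np.polyval(np.polyfit(x, y, 1), x)
        return float(resid @ resid)

    def bottom_up_segment(y, max_error):
        """Start from (roughly) two-point segments and repeatedly merge the
        adjacent pair with the cheapest merged fit, while under max_error."""
        segments = [list(y[i:i + 2]) for i in range(0, len(y), 2)]
        while len(segments) > 1:
            costs = [line_sse(segments[i] + segments[i + 1])
                     for i in range(len(segments) - 1)]
            i = int(np.argmin(costs))
            if costs[i] > max_error:
                break
            segments[i:i + 2] = [segments[i] + segments[i + 1]]
        return segments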


Miscellaneous

I. Sommerville. Software Engineering. 1995. Addison-Wesley.

D. Comer and D. Stevens. Internetworking with TCP/IP Volume 2. 1995. Addison-Wesley.

G. Fairhurst and L. Wood. Advice to link designers on link Automatic Repeat reQuest (ARQ). 2002. RFC, 3366, IETF.

W. Odom. Cisco CCNA, Exam #640-507 Certification Guide. 2001. Cisco Press.

C. Hedrick. Routing Information Protocol. 1988. RFC, 1058, IETF.

David D. Clark and Craig Partridge and Robert T. Braden and Bruce Davie and Sally Floyd and Van Jacobson and Dina Katabi and Greg Minshall and K. K. Ramakrishnan and Timothy Roscoe and Ion Stoica and John Wroclawski and Lixia Zhang. Making the world (of communications) a different place. 2005. SIGCOMM Comput. Commun. Rev., 35(3):91--96.
The first report of a group set up to examine the future of end-to-end communications. Makes various predictions as to the state communications technology will be in within 10 years, and describes the research projects necessary to allow such communications technologies to function well. Includes (among others) quantum networking, power consumption, and location awareness.

E. Lua and J. Crowcroft and M. Pias and R. Sharma and S. Lim. A Survey and Comparison of Peer-to-Peer Overlay Network Schemes. 2004. IEEE Communications Survey and Tutorial.
Overviews the various prominent structured and unstructured P2P systems. Those based on DHTs include Chord, CAN, Pastry, Tapestry, Kademlia, and Viceroy. Unstructured systems described are Gnutella, Freenet, KaZaA, eDonkey, and BitTorrent. The authors compare the strategies in each group, and suggest what further work might be done to increase security and robustness. In particular, scalability and the usage of economic incentives are singled out as key issues.

Maria Papadopouli and Henning Schulzrinne. Effects of power conservation, wireless coverage and cooperation on data dissemination among mobile devices. 2001. in Proc. MobiHoc, pages 117--127. ACM Press.
Describes the 7DS peer to peer data sharing system. This is designed around the paradigm of not having ubiquitous network access. Instead, data is prefetched and cached by peers that are mobile. Simulations are performed using mobile nodes at pedestrian speeds with a random waypoint model, with mobile and fixed servers, and in fully peer to peer and highly asymmetric server to client modes. The results appear to indicate that the degree of co-operation and the rate of mobility affect the percentage of nodes that gain a particular item of content after a certain time, as does the range of transmission, and that a peer to peer system performs better than a single server with multiple clients randomly coming into contact with it.

V. Jacobson. Congestion avoidance and control. 1988. in Proc. SIGCOMM, pages 314--329. ACM Press.
Describes the proposal for TCP to implement slow start and the additive-increase/multiplicative-decrease congestion avoidance strategy. The author also identifies the need for a form of congestion control at the gateways, and not only at the end points.

S. McCreary and K. Claffy. Trends in wide area IP traffic patterns -- A view from AMES Internet Exchange. 2000. Tech. Rep., CAIDA.

T. H. Cormen and C. E. Leiserson and R. L. Rivest. Introduction to Algorithms. 1990. MIT Press.

Deborah T. Marr and Frank Binns and David L. Hill and Glenn Hinton and David A. Koufaty and J. Alan Miller and Michael Upton. Hyper-Threading Technology Architecture and Microarchitecture. 2002. Intel Technology Journal, vol 6.
Hyperthreading makes one physical processor appear as two logical processors to the OS. This allows multiple threads to run in simultaneous multithreading mode, as opposed to thread switching by time slicing. The mechanism employed involves duplicating the architectural state inside the physical processor. This allows all other functional units to be shared, whilst a swap between two threads is instantaneous, i.e. there is no context switch overhead. The extra transistors required are less than 5 p.c. of the total original number. Each logical processor appears as a full processor to the OS, and has its own individually addressable interrupt controller (APIC).

J. H. Saltzer and D. P. Reed and D. D. Clark. End-to-End Arguments in System Design. 1984. ACM Transactions on Computer Systems, 2(4):277--288.
Where to place a particular function in a stack of software, particularly a communications system stack, is a question of system design. The authors of this paper argue that any functionality that is required by an application is probably best implemented at that application's layer. An example is reliable transmission: where should the reliability check be performed? At the network layer, we can confirm that data has traversed the network correctly, but we cannot know whether it has been corrupted by one of the end systems before reaching the relevant application. Hence, we end up re-implementing the functionality in the application layer in any case. Of course, this is not an argument for all functionality to be implemented at higher layers: sometimes it is more efficient to implement error checking at a low layer, if there are likely to be many errors at that layer. Hence, the end-to-end argument is not a rule to be followed blindly. It is simply an encouragement to think carefully about where functionality should be placed in a system.

M. Pias and J. Crowcroft and S. Wilbur and T. Harris and S. Bhatti. Lighthouses for Scalable Distributed Location. 2003. in 2nd International Workshop on Peer-to-Peer Systems (IPTPS '03).

H. Zimmermann. 1988. Chapter in Innovations in Internetworking. Artech House, Inc.

Robert W. Scheifler and Jim Gettys. The X window system. 1986. ACM Transactions on Graphics, 5(2):79--109.

Michael Mitzenmacher. Digital Fountains: A Survey and Look Forward. 2004. in Proc. Information Theory Workshop.

Thomas Stockhammer and Miska M. Hannuksela and Thomas Wiegand. H.264/AVC in Wireless Environments. 2003. IEEE Transactions on Circuits and Systems for Video Technology, 13(7):657--673.
Detailed paper on the coding mechanisms in H.264/AVC to deal with the errors introduced by wireless channels, and their potentially low bit rates. Various bit error patterns have been defined in the standard, that are dependent on the bit rate, the speed of the terminal, and the application (conversational or streaming). Common tests are also defined. Simulations are used to illustrate how the codec fares over a variable rate UMTS connection.

Saikat Guha and Neil Daswani and Ravi Jain. An Experimental Study of the Skype Peer-to-Peer VoIP System. 2006. Cornell University/Google.
Traffic analysis of the Skype P2P protocol. Shows a definite diurnal variation in the number of users in the network. Also shows that selecting supernodes by the amount of spare bandwidth and requiring them to have a public IP address ensures that the supernode population remains more constant than with other P2P networks, where churn is high. Load on supernodes is low, despite them relaying VoIP traffic.

Viktor Mayer-Schönberger. Useful Void: The Art of Forgetting in the Age of Ubiquitous Computing. 2007. RWP07-022, John F. Kennedy School of Government -- Harvard University.
Advocates the introduction of an expiry date in the metadata for digital objects. This would then be enforced by software. The author also envisages that people might carry around key rings that allow them to specify their preferred information expiry dates to others around them, such as when a photo is taken of the individual. The author considers such a move relatively easy.

Mark Weiser. The Computer for the Twenty-First Century. 1991. Scientific American, pages 94--104.
Describes the vision of ubiquitous computing, where computers vanish into the background, and are endowed with knowledge of their own, and their users', locations. Using a computer will not involve mental stress, or imply that users decrease their degree of social contact with each other, but instead will allow users to act naturally, whilst making use of technology. Tying any piece of work to a particular computer is unproductive: freeing it in order that it can be used from whatever computer you happen to be using is the solution, e.g. allowing a particular application window to be viewed on a workstation screen, and then easily transferred to a shared whiteboard in another room. Privacy and security concerns are of course important in this regard.

Andy Hopper. The Royal Society Clifford Paterson Lecture, 1999 - Sentient Computing. 2000. Philosophical Transactions of the Royal Society, London A, 358:2349--2358.
Describes how sentient computing applications are those which observe and react to the physical world in intuitive ways. The concept of programming with space is described, and sentient computing applications are outlined.

ETSI. Digital cellular telecommunications system (Phase 2+); Universal Mobile Telecommunications System (UMTS); AT command set for User Equipment (UE). 2008. TS 127.007 v. 8.3.0 R-8, ETSI/3GPP.
ETSI/3GPP specification of the AT commands that UMTS modems should/can support.

D. G. Krige. A statistical approach to some mine valuations and allied problems at the Witwatersrand. 1951. Master's thesis, University of Witwatersrand.
Proposed the technique now known as kriging for calculating where deposits of minerals might be in as yet un-mined locations.

Mashrur A. Chowdhury and Adel Sadek. Fundamentals of Intelligent Transportation Systems Planning. 2003. Artech House.
Contains a good overview of ITS in chapters 3 and 4.

Gilman Tolle and Joseph Polastre and Robert Szewczyk and David Culler and Neil Turner and Kevin Tu and Stephen Burgess and Todd Dawson and Phil Buonadonna and David Gay and Wei Hong. A Macroscope in the Redwoods. 2005. in Proc. ACM SenSys, pages 51--63.
Describes the deployment of a large number of sensor motes on a 70 metre tall redwood in California. The results show a variety of detailed information over time and height of the tree. An interesting paper on dense sensor network deployment.

David J. C. MacKay. Fountain Codes. 2004. in The IEE Seminar on Sparse-Graph Codes, pages 1-8. IEE.
An excellent introduction to fountain codes. The basic concept is to take the data to be transmitted and, instead of dividing it into blocks, generate a (potentially infinite) sequence of blocks, each of which depends on the entire file, or multiple portions of it. These blocks are transmitted, and provided the receiver obtains a large enough (but not necessarily contiguous) subset of them, it can recover the original file. The size of this subset does not have to be significantly greater than the number of blocks that the file could have originally been split into, were a normal packet-based transmission scheme employed. The author first describes the simplest type of fountain code, which uses a random linear encoder. Here, a random selection of the source data's blocks is XORed together to form each transmitted packet. Provided that the pseudo-random selection of blocks XORed together is known by the receiver (one way would be to transmit the seed of the pseudo-random number generator in the packet header), the receiver is able to decode the message after a certain number of packets have been received. The overhead in terms of the number of extra packets is small, and decreases relative to the file size as the file size increases. This mechanism has a high encoding and decoding overhead (quadratic and cubic, respectively, in the number of blocks in the data to be sent), so instead the LT code is proposed, where lower coding complexity is achieved by having some transmitted packets depend on only one original data packet. These can be used to start the decoding process off. Finally, the paper describes Raptor codes, which decrease the complexity to linear in the original data size. These pre-encode the original data with a non-sparse (unlike LT codes) scheme, where any packets can be lost, provided they constitute a small enough fraction of the total, and the data can still be inferred (as the random linear fountain code achieves). The resulting packets are then encoded using the LT method, which is a sparse encoder, and thus may have input packets that end up not being used to generate any output packets. This is fine, as it is countered by the non-sparse encoding being able to cope with a few lost packets. The end result is a very efficient encoding/decoding operation, whilst the data is still received correctly from any random subset of received packets numbering above a certain percentage of the total transmitted.
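A sketch of one packet of the simplest (random linear) fountain code described above; decoding, not shown, is Gaussian elimination over GF(2) once enough packets have arrived. The seed-in-header convention follows the paper's suggestion, and the half-density block selection is an illustrative choice:

    import random

    def fountain_packet(blocks, seed):
        """Produce one random-linear-fountain packet by XORing a random
        subset of equal-length source blocks (byte strings). The receiver
        reproduces the subset from the seed carried with the packet."""
        rng = random.Random(seed)
        mask = [rng.random() < 0.5 for _ in blocks]
        if not any(mask):  # ensure at least one block is included
            mask[rng.randrange(len(blocks))] = True
        payload = bytes(len(blocks[0]))
        for include, block in zip(mask, blocks):
            if include:
                payload = bytes(a ^ b for a, b in zip(payload, block))
        return seed, payload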

Sachin Katti and Hariharan Rahul and Wenjun Hu and Dina Katabi and Muriel Médard and Jon Crowcroft. XORs in the air: practical wireless network coding. 2006. SIGCOMM Comput. Commun. Rev., 36(4):243--254.
Describes a testbed for evaluating the effect of network coding on network capacity. Here, packets sent between two nodes via a common relay can be XORed together by the relay, and the result broadcast out. The two nodes can then XOR the received packet with their transmitted packet, to obtain the packet destined for them. This technique can be used with larger numbers of packets, thereby decreasing the number of transmissions a relay node must make, and hence increasing network capacity. The authors describe how they have implemented a 20 node testbed by including a network coding shim layer between the transport and MAC layers. In the implementation, nodes retain a pool of packets they have overheard on the air. This then allows them to decode later packets that are composed of other data XORed with already received packets. Neighbours advertise to each other what packets are in their packet pools. In addition, nodes guess what packets are likely to be in another's packet pool by examining the routing topology, and thus deducing what transmissions the other node will have overheard. In order to prevent TCP packet re-ordering, packets on each TCP stream are coded in order, and in addition each node has a TCP re-ordering module to try to ensure that out-of-order packets received within a certain time span are re-ordered into the order expected by TCP. In their testbed, the authors find that UDP network capacities can increase by a factor of 3 to 4. Hidden nodes are significant because they cause a high loss rate (as in any 802.11 network). This causes TCP's throughput to drop, presenting few coding opportunities. The authors therefore engineer the network to have the same routing topology, but with all nodes within carrier-sense range. This avoids hidden terminals, and dramatically improves throughput (and coding opportunities).
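The core relay trick is two XORs; a toy example with equal-length byte strings:

    # The relay XORs Alice's and Bob's packets into one broadcast; each
    # endpoint recovers the other's packet by XORing with its own copy.
    a = b"hello from alice"
    b = b"greetings to bob"
    coded = bytes(x ^ y for x, y in zip(a, b))
    assert bytes(x ^ y for x, y in zip(coded, a)) == b
    assert bytes(x ^ y for x, y in zip(coded, b)) == a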

Kyle Jamieson and Hari Balakrishnan. PPR: partial packet recovery for wireless networks. 2007. in Proc. ACM SIGCOMM, pages 409--420.
Whilst many wireless systems deployed today use large quantities of forward error correction (FEC) to try to avoid packet errors due to interference, this is sometimes not sufficient, and packets must be re-transmitted; yet much of the time the FEC is a waste of transmission throughput. The authors therefore propose the Partial Packet Recovery (PPR) scheme, whereby hints from the PHY layer are used to determine which portions of a partially corrupted packet are still useful, and only the corrupt portions are requested to be re-transmitted. In addition, the scheme adds a postamble to frames, to ensure that even if the preamble is corrupted, a frame can be decoded by synchronising with a known sequence after the data portion. The scheme is evaluated on a 31 node Zigbee testbed, finding that in a busy network PPR improves the number of correct bits delivered by the PHY to higher layers by a factor of four; the aggregate throughput improves by a factor of between 2 and 4 depending on network load; the scheme performs best at low SNRs; and the ARQ scheme proposed reduces re-transmissions by a median factor of 2.


Graph Theory

Alan Jones and Martin Brown. Choice Routing: Technical Overview. 2006. Camvit Ltd.

Boris V. Cherkassky and Andrew V. Goldberg. Negative-cycle detection algorithms. 1999. Mathematical Programming, 85(2):277--311.
Survey of different methods of detecting negative cycles in directed graphs. Describes a generalisation of the Bellman-Ford-Moore algorithm, then describes each of the cycle detection methods in terms of the operations used in the generalisation. An interesting observation used in many detection strategies is that the predecessor graph generated as a shortest path algorithm runs will contain a cycle if a negative cycle exists in the graph being used for routing. Also, Bellman-Ford-Moore can be sped up if the only vertices scanned in each pass are those that were touched in the previous pass; ensuring this is non-trivial, and is the subject of various algorithms outlined in this survey. The authors have implemented all the algorithms described, and in this paper compare them against each other according to various benchmarks.
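A minimal sketch of the baseline test that such refinements build on: if an n-th pass of Bellman-Ford-Moore relaxation can still improve some distance, a negative cycle must exist. Initialising every distance to zero behaves like a virtual source connected to all vertices, so no particular source is needed:

    def has_negative_cycle(num_vertices, edges):
        """edges: list of (u, v, weight) tuples on vertices 0..num_vertices-1.
        Returns True iff the graph contains a negative cycle."""
        dist = [0.0] * num_vertices
        for _ in range(num_vertices - 1):
            changed = False
            for u, v, w in edges:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    changed = True
            if not changed:  # no update in a full pass: no negative cycle
                return False
        # If an extra pass can still relax an edge, a negative cycle exists
        return any(dist[u] + w < dist[v] for u, v, w in edges)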

M. Gouda and M. Schneider. Maximizable Routing Metrics. 1998. in Proc. International Conference on Network Protocols, pages 71--78.
Examines the conditions necessary for a routing metric to be maximisable, i.e. to yield routes which have the maximum total metric between two given points. The authors show that boundedness (the property that, given two edges A and B, if the metric for A is greater than that for B, then composing each individually with a single third edge results in composed metrics that are still subject to the same ordering) and monotonicity (the property that adding an edge to an existing route must yield a route whose metric is smaller than or equal to the original route's) must hold for a maximisable routing metric. How maximisable routing metrics are composed is also examined. Various examples that do and do not meet these conditions are presented. The IGRP routing protocol is then analysed, and the authors show that its metric does not meet the conditions given above, and hence is not guaranteed to always give the globally optimal routes. A good, concise but well thought-through paper.

Mohamed G. Gouda and Marco Schneider. Maximizable Routing Metrics. 2003. IEEE/ACM Transactions on Networking, 11(4):663--675.
Extended version of the above paper. Adds a section on additive metric composition, and briefly overviews link-state routing with source routing of packets.

João Luís Sobrinho. Network Routing with Path Vector Protocols: Theory and Applications. 2003. in Proc. ACM SIGCOMM, pages 49--60.
Examines how path vector routing protocols can be cast into a formal algebra. In particular, for globally optimal shortest paths, the weighting function used must be monotonic (i.e. adding edges to a route never lowers the total route metric) and isotonic (i.e. if the metric of edge A is greater than that of edge B, composing each with the same third edge still leaves the combined metric for A greater than that for B). These properties are then used to examine how IGRP and customer-provider relationships in BGP (among other examples) can be expressed in the algebra, and hence to show whether such algorithms yield optimal paths. The author also examines the freeness property, which states that whatever preference a node uses to choose between paths of equal weight, such a preference will not prevent the algorithm arriving at the global optimum, provided that the freeness condition is enforced. This condition states that given a cycle, and paths originating at each node in the cycle, all of equal weight, at least one path will increase in weight as it traverses the cycle. In the presence of strict monotonicity (i.e. where adding any edge excludes the possibility of a route's metric remaining constant rather than increasing), the freeness condition always holds.

William W. Hardgrave and George L. Nemhauser. On the Relation Between the Traveling-Salesman and the Longest-Path Problems. 1962. Operations Research, 10(5):647--657.
Describes how the traveling-salesman problem is a special case of the longest path problem. The authors admit that neither problem has an efficient general solution as yet, but hope that the longest path problem will prove easier to solve. The remainder of the paper concerns possible algorithms for solving the longest path problem.

S. N. Narahari Pandit. Some Observations on the Longest Path Problem. 1964. Operations Research, 12(2):361--364.
Letter to the editor. Notes that the paper above transforms one problem into another, a technique useful in other contexts, but in this context neither problem has a known efficient solution. Also notes that the transformation of the problem retains one key structural characteristic, namely that the path should pass through all nodes exactly once, and hence there appears no benefit in not attempting to solve the original problem directly.

Ryuhei Uehara and Yushi Uno. Efficient Algorithms for the Longest Path Problem. 2004. in Proc. International Symposium on Algorithms and Computation, pages 871--883.
The authors generalise the linear time algorithm for the longest path problem on trees, and apply it to weighted trees, block graphs, Ptolemaic graphs, and cacti.

Maria Grazia Scutella. An approximation algorithm for computing longest paths. 2003. European Journal of Operational Research, 148(3):584--590.
Using a colour coding method described by previous authors that could find longest paths of cardinality k, the author generalises the algorithm in order to take account of edges with non-unit weights, and hence uses it to find longest paths in terms of those weights that have a cardinality of up to any fixed k.

Edsger W. Dijkstra. A note on two problems in connexion with graphs. 1959. Numerische Mathematik, 1:269--271.

Richard Bellman. On A Routing Problem. 1958. Quarterly of Applied Mathematics, 16(1):87--90.

Frode Eika Sandnes and Oliver Sinnen. Stochastic DFS for Multiprocessor Scheduling of Cyclic Taskgraphs. 2005. Chapter in Parallel and Distributed Computing: Applications and Technologies. Springer-Verlag.
Examines how to schedule task graphs containing loops. Presents an algorithm to detect cycles in task graphs, and removes the final edge in any cycle. The cycle removal is random (stochastic) because the algorithm progresses by picking an edge at random from the adjacency list of the node currently being processed, and proceeding in this way on a depth-first search (DFS) basis.

Michael S. Gudaitis and Gary B. Lamont and Andrew J. Terzuoli. Multicriteria vehicle route-planning using parallel A* search. 1995. in Proc. ACM symposium on Applied computing, pages 171--176. ACM.
Describes the problem of routing an aircraft between two points so as to avoid exposure to radar that exceeds a certain amount. The cost function used combines Euclidean distance and radar exposure. The weightings assigned to each change as the route is built up, with more weight being given to radar exposure as the total (consecutive) radar exposure increases. The authors note that this approach does not give globally optimal solutions, but this appears to be enough for this particular application.

Matthias Ehrgott and Xavier Gandibleux. A survey and annotated bibliography of multiobjective combinatorial optimization. 2000. OR Spectrum, 22(4):425--460.
Good survey of different algorithms for solving classes of problem where there are multiple criteria to be optimised against. The most common understanding of a solution is Pareto optimality: a solution is deemed efficient (non-dominated) if no other solution is at least as good on every criterion and strictly better on at least one. Another definition is lexicographic ordering, where we deem one criterion more important than another, and hence concentrate on minimising that criterion; only when two solutions are equal for that criterion do we examine further criteria, in descending order of importance. The max-ordering problem develops this by taking each solution, ordering its characteristics by size, and then comparing all solutions lexicographically. To decide amongst the many Pareto efficient solutions a problem may have, it is common to combine the characteristics of each solution into a single scalar value. The authors note that because multiobjective combinatorial optimisation deals with discrete (rather than continuous) quantities, the set of all efficient solutions cannot be determined using only weighted linear combinations of solutions' characteristics. The solutions that cannot be found in such ways are classed as nonsupported efficient solutions; many algorithms ignore this class of solutions altogether. Apart from weighted linear combination, another method is the compromise solution method: given details of what would be an ideal solution, the algorithm tries to find the solution with the minimum of the maximum differences between each of that solution's characteristics and those of the ideal solution. Unfortunately this is an NP-complete problem. Dynamic programming and approximation methods (simulated annealing, tabu search, genetic algorithms) are also briefly covered. The algorithms found in the literature are then classified in detail, and various general remarks made.
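The dominance test itself is compact; a sketch for minimisation over criterion vectors (the survey's harder question, finding the efficient solutions of a combinatorial problem without enumerating the whole solution space, is of course not addressed by brute-force filtering):

    def dominates(a, b):
        """True if a dominates b: no worse on every criterion (minimisation
        assumed) and strictly better on at least one."""
        return (all(x <= y for x, y in zip(a, b)) and
                any(x < y for x, y in zip(a, b)))

    def pareto_front(solutions):
        """Return the non-dominated subset of a list of criterion vectors."""
        return [s for s in solutions
                if not any(dominates(o, s) for o in solutions)]

For example, pareto_front([(1, 5), (2, 2), (3, 1), (4, 4)]) drops (4, 4), which is dominated by (2, 2).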

Chi Tung Tung and Kim Lin Chew. A multicriteria Pareto-optimal path algorithm. 1992. European Journal of Operational Research, 62(2):203--209.
Describes the implementation of an algorithm to provide Pareto efficient solutions to the multicriteria routing problem. Essentially, solutions are computed by recording the sums of each criterion at each node. An evaluation function is then used to compare different nodes to determine which should be investigated further (i.e. which node's outgoing links should be traversed in the next iteration). The evaluation function proposed is the sum of all the criteria.

Konstantinos G. Zografos and Konstantinos N. Androutsopoulos. Algorithms for Itinerary Planning in Multimodal Transportation Networks. 2008. IEEE Transactions on Intelligent Transportation Systems. (Accepted for future publication.)
Describes how journeys by public transport can be planned using shortest path algorithms. The authors include a notion of the walking time between transport hubs that are nearby. By using lexicographical ordering on the various criteria that they deem important to a user, they are able to derive routes that achieve a particular arrival time requirement, and that can also traverse a particular intermediate point. The authors note that by using lexicographical ordering they relieve the user of having to choose between various solutions that do not dominate each other. However, they also note that inclusion of economic cost is also an important factor that would mean that such a range of options should be presented.

L. R. Ford Jr. and D. R. Fulkerson. Flows in Networks. 1962. Princeton University Press.
Introduces the idea of maximum flow routing, where the aim is to find the route between the source and destination that has the maximum capacity, i.e. where the size of the smallest bottleneck is largest.

John Current and HoKey Min. Multiobjective design of transportation networks: Taxonomy and Annotation. 1986. European Journal of Operational Research, 26(2):187--201.
Old survey of papers on multiobjective routing.

John Current and Michael Marsh. Multiobjective Transportation Network Design and Routing Problems: Taxonomy and Annotation. 1993. European Journal of Operational Research, 65(1):4--19.
Updated survey of papers on multiobjective routing.

C. Peter Keller. Algorithms to Solve the Orienteering Problem: A Comparison. 1989. European Journal of Operational Research, 41(2):224--231.
The orienteering problem (OP) is similar to the travelling salesman problem, but there is a reward for visiting each node, as well as a distance penalty in travelling to it; this formulation is known as the Multiobjective Vending Problem (MVP). The objective is to maximise reward and minimise penalty. The constraint method is one approach proposed for the MVP: it places a bound on the total penalty, finds the route with the maximum reward, then increases the penalty threshold and repeats. In this way the best trade-off route can be found. Note that the OP has the property that the start and finish are set to the same point (or are very close together, so can be assumed to be co-located). The MVP method is compared to three other algorithms, and found to work best overall in the three scenarios tested.

Jared L. Cohon. Multiobjective Programming and Planning. 1978. Academic Press.
Introduction to the field.

Robert B. Dial. A Model and Algorithm for Multicriteria Route-mode Choice. 1979. Transportation Research, 13B(4):311--316.
Describes an algorithm to enumerate possible routes that each satisfy a total user cost. This cost function is a weighted combination of traversal time and monetary cost, with the weightings being user specific. Beginning by finding the least-cost and least-time routes, the algorithm proceeds to find routes for other values of the weights, in order that the user may select whichever is most palatable to them. Each resulting route is the minimisation of the total cost function using the selected weights.

T. Tsiligirides. Heuristic Methods Applied to Orienteering. 1984. Journal of the Operational Research Society, 35(9):797--809.
In the $S$-algorithm, a desirability ranking is assigned to each node, based on the reward conferred by that node, $s_j$, and the penalty associated with travelling from the last chosen node to it, $t_{\mathrm{last},j}$. This is expressed as $A_j = (s_j / t_{\mathrm{last},j})^4$. One of the four top nodes in this ranking is randomly selected. The sequence is further shuffled to see whether a more optimal route can be realised. Finally, nodes are inserted and removed to see whether a better solution can be found. The $D$-algorithm divides the area into sectors in concentric rings. Sectors are varied by modifying the radii and axes used to derive the rings. On each run, 48 sectors are examined, and all nodes within each of those sectors visited, subject to the penalty threshold not being exceeded. Each route is then subjected to the $S$-algorithm for improvement. The algorithms are evaluated on three small test cases. The key point to note is that the first algorithm is stochastic, i.e. on large datasets there is no guarantee that it will be heading in the correct direction for a solution. The separate problem of how scores should be assigned in orienteering events is also considered.
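A sketch of just the selection step, assuming reward and travel_time lookups (names are illustrative; the shuffling and insertion/removal improvement phases are omitted):

    import random

    def next_node(current, unvisited, reward, travel_time):
        """Rank unvisited nodes by desirability A_j = (s_j / t_last,j)^4
        and pick randomly among the top four, as in the S-algorithm."""
        ranked = sorted(unvisited,
                        key=lambda j: (reward[j] / travel_time[current][j]) ** 4,
                        reverse=True)
        return random.choice(ranked[:4])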

Mordechai I. Henig. The shortest path problem with two objective functions. 1985. European Journal of Operational Research, 25(2):281--291.
Assumes two monotonically decreasing objective functions can be used to provide coefficients for each edge. The aim is to generate the Pareto set of nondominated solutions, i.e. those paths for which no other path has a lower cost on both objectives. Extreme nondominated paths are found by taking a linear combination of the two objective functions and, for each value of its coefficient $b$, finding the minimal-cost path. Search methods are then presented to find which of these paths is optimal, based on the actual utility function the user has; the search is easy if, for example, the utility function is quasiconcave and monotone decreasing.

Yang Byung Park and C. Patrick Koelling. A Solution of Vehicle Routing Problems in a Multiple Objective Environment. 1986. Engineering Costs and Production Economics, 10(2):121--132.
Examines the problem of how to route a vehicle to visit various depots in the minimum distance/time, whilst attempting to ensure that those depots that must be urgently visited are reached quickly. In addition, some depots may be dependent in some way on others; hence the vehicle must ensure that such dependencies are met as far as possible by visiting such subsets of depots in the correct order. These criteria may evidently conflict with one another. The authors formulate various constraints on the route (such as a maximum time before the product being carried might deteriorate), then form a linear combination of them. This utility function was too complex for the optimal solution to be found by direct goal programming. Instead the authors propose an iterative procedure, where the next step in the route is found by a linear goal programming model, and is contingent on the route selected thus far. The key point concerning this approach is that it is not guaranteed to generate optimal solutions, but it nonetheless achieves results that satisfy the original goals. The authors apply this method to clusters of depots, rather than the whole network at a time, and hence propose a clustering algorithm. They provide running times and measures of how well the goals were achieved for three test cases.

M. Garey and D. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. 1979. W. H. Freeman.
Contains the example showing that it is NP-complete to determine the existence of a path between two given nodes whose length is less than one given value and whose weight is less than another.

Arthur Warburton. Approximation of Pareto Optima in Multiple-Objective Shortest-Path Problems. 1987. Operations Research, 35(1):70--79.
Points out that by using such weighted linear combinations of objective functions, and considering each in turn as a single-objective shortest path problem, Henig (and others) risked missing large segments of the solution space. Instead, Warburton proposed that the routing algorithm keep track of all the non-dominated paths found thus far to the node currently under consideration; when the next edge is added to each of these paths, any that become dominated are deleted. However, such an approach has very high space complexity, and algorithms are therefore proposed to reduce it to more manageable levels.
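A sketch of the label-keeping idea on a toy graph with hypothetical (time, cost) edge weights; each node accumulates its set of nondominated labels, and labels that become dominated are deleted:

    from collections import deque

    # Hypothetical graph: edge -> (time, cost) pair.
    graph = {
        "A": {"B": (2, 5), "C": (4, 2)},
        "B": {"D": (3, 3)},
        "C": {"D": (1, 6), "B": (1, 1)},
        "D": {},
    }

    def dominates(p, q):
        # p dominates q if it is no worse on both criteria and differs.
        return p[0] <= q[0] and p[1] <= q[1] and p != q

    def pareto_paths(src):
        # Keep every nondominated (time, cost) label found so far per
        # node; dominated newcomers are discarded, and existing labels
        # they dominate are deleted.
        labels = {node: set() for node in graph}
        labels[src].add((0, 0))
        queue = deque([(src, (0, 0))])
        while queue:
            node, label = queue.popleft()
            if label not in labels[node]:
                continue  # already pruned by a dominating label
            for nxt, (t, c) in graph[node].items():
                new = (label[0] + t, label[1] + c)
                if new in labels[nxt] or \
                        any(dominates(old, new) for old in labels[nxt]):
                    continue
                labels[nxt] = {old for old in labels[nxt]
                               if not dominates(new, old)}
                labels[nxt].add(new)
                queue.append((nxt, new))
        return labels

    print(pareto_paths("A")["D"])

Note how the per-node label sets can grow: this is precisely the space-complexity problem that Warburton's approximation algorithms address.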

Piotr Czyżak and Andrzej Jaszkiewicz. Pareto Simulated Annealing -- A Metaheuristic Technique for Multiple-Objective Combinatorial Optimization. 1998. Journal of Multi-Criteria Decision Analysis, 7:34--47.
In recent years a variety of meta-heuristic methods have arisen that aim to find nearly-optimal solutions and subsequently ``tweak'' them closer to optimality. This paper describes how simulated annealing can be used for this purpose in multiple-objective problems. Note that this is not a paper concerning routing.
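A much-simplified, single-solution sketch of the idea on a toy bi-objective problem (hypothetical objectives; the paper's actual method anneals a population of interacting solutions with adaptively tuned weights):

    import math
    import random

    # Toy bi-objective problem on bit strings: f1 counts ones, f2 counts
    # leading zeros; both are maximised and they conflict.
    N = 12

    def objectives(x):
        ones = sum(x)
        lead = next((i for i, b in enumerate(x) if b), N)
        return ones, lead

    def dominates(p, q):
        return p[0] >= q[0] and p[1] >= q[1] and p != q

    def pareto_sa(steps=5000, temp=2.0, cooling=0.999):
        x = [random.randint(0, 1) for _ in range(N)]
        archive = {tuple(x): objectives(x)}
        w = random.random()  # weighting between objectives for acceptance
        for _ in range(steps):
            y = list(x)
            y[random.randrange(N)] ^= 1  # neighbour: flip one bit
            fx, fy = objectives(x), objectives(y)
            # Weighted scalarised change drives the acceptance rule.
            delta = w * (fy[0] - fx[0]) + (1 - w) * (fy[1] - fx[1])
            if delta >= 0 or random.random() < math.exp(delta / temp):
                x = y
            temp *= cooling
            # Archive nondominated points seen along the way.
            fx = objectives(x)
            if not any(dominates(v, fx) for v in archive.values()):
                archive = {k: v for k, v in archive.items()
                           if not dominates(fx, v)}
                archive[tuple(x)] = fx
        return sorted(set(archive.values()))

    print(pareto_sa())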

M. Bazaraa and C. Shetty. Nonlinear Programming, Theory and Algorithms. 1979. Wiley.
Theorem 3.5.3 states that if a utility function is quasiconvex then the convex hull of the Pareto set will contain the optimal (extreme non-dominated) solution to the problem the utility function seeks to solve.


Misc. Security

F. Stajano. Security for Ubiquitous Computing. 2002. John Wiley and Sons.

M. Kuhn and R. Anderson. Tamper Resistance -- a Cautionary Note. 1996. in Proc. 2nd Usenix Workshop on Electronic Commerce, pages 1--11.

B. Preneel. Preimage Resistance. Unpublished.

E. Schenk and D. Bernstein. SYN Cookies -- Mailing List Archive. Unpublished.

A. Perrig and R. Canetti and J. Tygar and D. Song. The TESLA Broadcast Authentication Protocol. 2002. RSA CryptoBytes, vol 5.

R. Anderson. The Eternity Service. 1996. in Proc. Pragocrypt, pages 242--252.

Ari Juels and Ronald L. Rivest and Michael Szydlo. The Blocker Tag: Selective Blocking of RFID Tags for Consumer Privacy. 2003. in Proc. ACM Conference on Computer and Communications Security, pages 103--111. ACM Press.
To enhance privacy for RFID tags used in product labelling, the authors propose a blocker tag that, when a reader queries a particular subset of the RFID tag ID space, responds with a signal that, in effect, makes it appear as if all of the tags in that subset are present. This exploits the tree-walking approach taken by RFID readers, where all tags with a particular ID prefix are queried; if there is a collision in the response, the reader must narrow the query range by adding one bit to the prefix. If, whenever a tag to be protected responds, the blocker tag also responds, causing a collision, then the reader is forced to investigate the entire ID space. The authors go on to examine the designation of specific privacy zones in the ID space; a selective blocker tag can be produced to protect each different privacy zone. This can be further developed if the first bit of an RFID tag may be flipped: at a shop, the tag can be presented with a tag-specific key that causes it to flip the first bit of its ID, thus placing it into the privacy zone of a blocker tag that can be affixed to it. This prevents any unauthorised reading. Once the goods reach the user's home, the blocker tag can be removed, and the RFID tags in the goods are useful once more (e.g. in a fridge that knows its contents). This is in contrast to the current ``kill'' command for RFID tags, which renders them permanently useless.
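A toy simulation of the tree-walking interaction (hypothetical tag IDs and privacy zone) shows why a blocker forces the reader to enumerate the whole protected subtree:

    # Tag IDs are fixed-length bit strings; the blocker answers for
    # every ID in its privacy zone (here: all IDs starting with '1'),
    # so the reader cannot distinguish real tags from simulated ones.
    BITS = 4
    tags = {"0110", "1010"}   # hypothetical genuine tags
    blocker_zone = "1"        # privacy zone the blocker protects

    def responders(prefix):
        # Who answers a query for this prefix? Genuine tags whose ID
        # matches, plus the blocker if the query overlaps its zone.
        real = [t for t in tags if t.startswith(prefix)]
        blocked = (blocker_zone.startswith(prefix)
                   or prefix.startswith(blocker_zone))
        return real, blocked

    def tree_walk(prefix=""):
        real, blocked = responders(prefix)
        if not real and not blocked:
            return []            # silence: prune this subtree
        if len(prefix) == BITS:
            return [prefix]      # a full ID was singulated (or faked)
        # Apparent collision: recurse on both one-bit extensions.
        return tree_walk(prefix + "0") + tree_walk(prefix + "1")

    ids = tree_walk()
    print(len(ids), "apparent tags:", ids)

With the blocker present, every leaf under the protected prefix appears occupied, so the genuine tag in the privacy zone is hidden among the simulated ones.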

William Enck and Patrick Traynor and Patrick McDaniel and Thomas La Porta. Exploiting Open Functionality in SMS Capable Cellular Networks. 2005. in Proc. ACM Conference on Computer and Communications Security (CCS). Alexandria, Virginia, USA.
DoS the phone network by flooding the signalling channel with SMS messages.

Ross Anderson and Mike Bond and Jolyon Clulow and Sergei Skorobogatov. Cryptographic Processors -- A Survey. 2006. Proceedings of the IEEE, 94(2):357--369.
Title says it all.

Alastair R. Beresford and Frank Stajano. Location Privacy in Pervasive Computing. 2003. IEEE Pervasive Computing, 2(1):46--55.


David Cottingham < Email: david.cottingham.nospam@cl.cam.ac.uk >