Mark Weiser's vision that ubiquitous computing will overcome the problem of information overload by embedding computation in the environment is on the verge of becoming a reality. Today's technology is capable of handling the many forms of multimedia that pervade our lives, and as a result it is creating a healthy demand for new content management and retrieval services. This demand is everywhere: it comes from mobile videophone owners, digital camera owners, the entertainment industry, medicine, surveillance, the military, and virtually every library and museum in the world where multimedia assets lie unknown, unseen and unused.
The volume of visual data in the world is increasing exponentially through the mass-market use of digital camcorders and cameras. These devices are the modern-day consumer equivalents of ubiquitous computers and, although storage space is in plentiful supply, access and retrieval remain a severe bottleneck for both the home user and industry. This paper describes an approach that uses a visual attention model together with a similarity measure to automatically identify salient visual material and generate searchable metadata associating related items in a database. Such a system for content classification and access will be of great use in current and future pervasive environments where static and mobile retrieval of visual imagery is required.
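As a rough illustration of the kind of pipeline just described, the toy sketch below uses a centre-surround contrast map as a stand-in for a visual attention model, reduces it to a compact descriptor, and compares images with cosine similarity. The function names (`saliency_map`, `descriptor`, `similarity`) and the specific measures are illustrative assumptions only, not the model developed in this paper.

```python
import numpy as np

def saliency_map(img: np.ndarray, radius: int = 2) -> np.ndarray:
    """Toy attention score: |pixel - local mean| (centre-surround contrast).

    `img` is a 2-D grayscale array with values in [0, 1].
    """
    h, w = img.shape
    sal = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            sal[y, x] = abs(img[y, x] - img[y0:y1, x0:x1].mean())
    return sal

def descriptor(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Unit-length histogram of saliency values: crude searchable metadata."""
    hist, _ = np.histogram(saliency_map(img), bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm else hist

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between the saliency descriptors of two images."""
    return float(descriptor(a) @ descriptor(b))
```

In a database setting, the descriptors would be computed once per asset and stored as metadata, so that `similarity` needs only a dot product at query time to associate related items.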