Sunday, 9 October 2011

Stages in a Multimedia Project


1. Requirements analysis, definition and system overview

  • Define exactly what we want to do.
  • Identify the audience: who they are, their age, and their preferences in multimedia elements. Extract the requirements from users and gauge the feasibility of the project. (Clients usually know what they want, but often have incomplete, ambiguous, contradictory or even unrealistic requirements. It is the job of software developers and project managers to steer them in the right direction and to clarify the requirements.)
  • The system overview gives the big outline of the project and the steps that need to be carried out.
  • The idea is expanded, trimmed, manipulated, altered and formed into a complete concept.
  • Determine and document the hardware and software environment, the functions or features the software must have, as well as the security requirements.
  • Specification and requirements analysis vary a lot depending on the formality, size and scope of the project and the methodology used in the organization. (In general, it involves taking the requirements and describing them in a way meaningful to programmers.)
  • Create a prototype.
  • Present the prototype to the client. If they agree, proceed with the next step.


2. Estimation

  • Be able to choose between different options, discussing the platforms, technologies and tools that will be used in the project.
  • Clearly list what is included in the price and how much any additional features will cost.
  • Come up with a rough project schedule.

3. Functional specification and UI prototype

  • The functional specification determines exactly what the target system must do.
  • All requirements should be thoroughly defined and documented.

4. Software architecture/design and test plan
  • Software architecture and design include both general and detailed design, at a lower level of detail than the previous step.
  • The general design covers the structural design and the development strategy.
  • The detailed design specifies and documents how each component is developed.
  • Determine the system components that fulfill the requirements and the way these components work together.

5. Development or Coding
  • This is pretty obvious: this is where the rubber meets the road and the program is built.
  • Make sure the code is understandable, organized and documented.

6. Implementation (coding) and testing

  • Build the target system according to the specification developed in the previous stage.
  • Translate the specified algorithms into a programming language.
  • To ensure the components function together properly and the implementation meets the system specification, comprehensive testing and rectification of the errors identified is a must (see the small test sketch after this list).
  • Make sure the program functions as required and that the program's functionality is described fully to allow for easier maintenance in the future.
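
As a rough, hedged illustration of what testing an individual component can look like, here is a minimal Python unit test. The resize_thumbnail helper is invented purely for this example and is not taken from any real project.

```python
import unittest

def resize_thumbnail(width, height, max_side=128):
    """Hypothetical helper: scale an image's dimensions so the longer side fits max_side."""
    if width <= 0 or height <= 0:
        raise ValueError("dimensions must be positive")
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

class ThumbnailTest(unittest.TestCase):
    def test_landscape_image_is_scaled_down(self):
        self.assertEqual(resize_thumbnail(640, 480), (128, 96))

    def test_small_image_is_left_unchanged(self):
        self.assertEqual(resize_thumbnail(100, 50), (100, 50))

    def test_invalid_size_is_rejected(self):
        with self.assertRaises(ValueError):
            resize_thumbnail(0, 480)

if __name__ == "__main__":
    unittest.main()
```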

7. Release, delivery and installation
  • The process of getting the program into the hands of users. This usually includes more than just distributing the software; it also involves providing training to users.
  • Formal acceptance testing (a series of end-to-end tests) ensures the system fulfills the acceptance requirements determined by the functional specification.
  • Once this phase is complete, the product is considered formally delivered and accepted.

8. Operation and maintenance

  • Ensure the proper functioning of the software.
  • The software should be continuously maintained to detect and correct errors, and to extend and improve it.
  • This stage is concerned with maintaining the existing program by fixing any bugs and enhancing the software.
  • With a well-written system, most of the actual maintenance consists of small, incremental improvements rather than bug fixes.


Monday, 3 October 2011

Content-Based Information Retrieval/Multimedia Information Retrieval

MULTIMEDIA INFORMATION RETRIEVAL / CONTENT BASED INFORMATION RETRIEVAL (MIR/CBIR)



WHAT IS INFORMATION RETRIEVAL?

The simplest definition of information retrieval:

- a process of giving users new paradigms and methods for searching through the myriad variety of media across the world.

Long before multimedia existed, and even up to today, librarians have practised the traditional methods of information retrieval: creating reference cards (nowadays a database) and placing objects, especially books, in physical locations according to a classification scheme. IR deals with the representation, storage and retrieval of unstructured data.

With the rising popularity of rich media services such as Flickr, YouTube, and Jumpcut, new challenges in large-scale multimedia information retrieval have emerged. Addressing them relies not only on meta-data but also on content-based information retrieval, combined with the collective knowledge of users and the geo-referenced meta-data captured during the creation process.






EXAMPLES OF SEARCH ENGINES ON THE WORLD WIDE WEB



WHAT IS MULTIMEDIA?
Multimedia, on the other hand, is any combination of two or more of these elements: animation, video, art, audio and text, delivered digitally or electronically.

WHAT IS MULTIMEDIA INFORMATION RETRIEVAL?
Multimedia information retrieval means the process of searching for and finding multimedia documents; the corresponding research field is concerned with building the best possible multimedia search engines. The intriguing bit here is that the query itself can be a multimedia excerpt. MIR is about the search for knowledge in all its forms, everywhere. MIR systems store and retrieve attributes, text, 2D grey-scale and color images, 1D time series, digitized voice/music, and video. Indeed, Rowe and Jain (2005) mirrored this sentiment with the goal to "make capturing, storing, finding, and using digital media an everyday occurrence in our computing environment." Nowadays, the main problem has been how to improve multimedia retrieval by using content-based methods, which can also improve retrieval accuracy:

' Currently, the fundamental problem has been how to enable or improve multimedia retrieval using content-based methods. Content-based methods are necessary when text annotations are nonexistent or incomplete. Furthermore, content-based methods can potentially improve retrieval accuracy even when text annotations are present by giving additional insight into the media collections. '



METHODS OF RETRIEVING IT

There are 3 methods of retrieving information in a digital library.


The first and the easiest way is free browsing

-By this means, a user browses through a collection and looks for desired information.

The second method—the most popular technique used today—is text-based retrieval

-Through this method, textual information (the full text of text-based documents and/or the metadata of multimedia documents) is indexed so that a user can search the digital library by using keywords or controlled terms (a minimal sketch of such keyword indexing is given after the three methods).

The third method is content-based retrieval, which enables a user to search multimedia information in terms of the actual content of the image, audio, or video (Marques and Furht 2002). Some content features that have been studied so far include color, texture, size, shape, motion, and pitch.
For example, a patron may have a segment of music audio about which he or she knows nothing but wants to find out more. By using a content-based audio retrieval system, the patron can get similar audio clips with detailed information from a digital music library, and then listen to them to find an exact match. This procedure is much easier than doing a search on a text-based music search system. It is definitely helpful if a user can search this non-textual information by styles and features.
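
As a small aside on the second method: the usual machinery behind keyword search is an inverted index that maps each term to the documents containing it. The sketch below is a toy illustration of that idea, not the implementation of any particular library system; the documents and queries are made up.

```python
from collections import defaultdict

# Toy metadata records standing in for a digital library's catalogue.
documents = {
    1: "impressionist oil painting of a garden in spring",
    2: "black and white photograph of a city street",
    3: "oil painting of a city skyline at night",
}

# Build the inverted index: keyword -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return the ids of documents whose metadata contains every query keyword."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

print(search("oil painting"))   # {1, 3}
print(search("city painting"))  # {3}
```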




HOW DOES IT WORK?


The underlying techniques and common approaches used to build multimedia search engines range from metadata-driven retrieval, through piggy-back text retrieval (where automated processes create text surrogates for the multimedia) and automated image annotation, to content-based retrieval.

The multimedia retrieval system uses multimodal information: visual content, GPS data, and audio information annotated at capture time.

This system can be used via a Web site or when people are visiting the place. The multimedia retrieval system was evaluated using a mobile interface in a cultural heritage site where the personal media can be shared by visitors and can be used to guide the visit.
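
The description above does not say how these modalities are combined, so purely as an illustrative assumption, one simple possibility is a weighted late fusion of per-modality similarity scores. The weights and score values below are invented for the example.

```python
# Hedged sketch: weighted late fusion of per-modality similarity scores.
def fuse_scores(modality_scores, weights):
    """Combine per-modality similarity scores (all in [0, 1]) into one ranking score."""
    return sum(weights[m] * s for m, s in modality_scores.items())

candidates = {
    "photo_042": {"visual": 0.82, "gps": 0.95, "audio": 0.10},
    "photo_107": {"visual": 0.60, "gps": 0.20, "audio": 0.70},
}
weights = {"visual": 0.5, "gps": 0.3, "audio": 0.2}

ranked = sorted(candidates, key=lambda item: fuse_scores(candidates[item], weights), reverse=True)
print(ranked)  # photo_042 scores 0.715, photo_107 scores 0.50
```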

In simple words, a typical content-based information retrieval system works in this way:


First, for each multimedia file in the database, certain feature information (e.g., color, motion, or pitch) is extracted, indexed, and stored.

Second, when a user composes a query, the feature information of the query is calculated as vectors. 

Finally, the system compares the similarity between the feature vectors of the query and of the multimedia data, and retrieves the best matching records. If the user is not satisfied with the retrieved records, he or she can refine the search by selecting the ones most relevant to the query and repeating the search with this new information.
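
Putting those three steps together, here is a minimal sketch that assumes the feature is a plain color histogram and that similarity is cosine similarity. Real systems use richer features and proper index structures, so this is an illustration of the pipeline only; the file names and random "images" are placeholders.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Feature extraction: a normalised per-channel color histogram.
    `image` is an (H, W, 3) array of uint8 RGB values."""
    channels = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    feature = np.concatenate(channels).astype(float)
    return feature / (feature.sum() + 1e-9)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Step 1: extract and store a feature vector for every file in the database.
# (Random arrays stand in for decoded image files in this toy example.)
rng = np.random.default_rng(0)
database = {name: rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
            for name in ("a.png", "b.png", "c.png")}
index = {name: color_histogram(img) for name, img in database.items()}

# Step 2: compute the feature vector of the query image.
query_image = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
query_vector = color_histogram(query_image)

# Step 3: rank the database by similarity to the query and return the best matches.
ranking = sorted(index.items(), key=lambda kv: cosine_similarity(query_vector, kv[1]), reverse=True)
print([name for name, _ in ranking])
```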





WHAT ARE THE FUNDAMENTAL NECESSITIES OF AN MIR SYSTEM?
1. Searching for a particular media item.
2. Browsing and summarizing a media collection.

GOALS OF IR
1. Accuracy: retrieve the documents that users expect in the answer, with as few inaccurate answers as possible (see the small example after this list).
2. Speed: retrieval has to be fast, and the system should respond in real time.
3. Convenience: retrieve any information by typing in keywords anywhere, anytime.
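
The accuracy goal is conventionally measured with precision (the fraction of retrieved documents that are relevant) and recall (the fraction of relevant documents that are retrieved). A tiny worked example with made-up document ids:

```python
def precision_recall(retrieved, relevant):
    """Precision = |retrieved & relevant| / |retrieved|; recall = |retrieved & relevant| / |relevant|."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# The system returned 4 documents; 3 of them are among the 5 truly relevant ones.
p, r = precision_recall(retrieved=[1, 2, 3, 9], relevant=[1, 2, 3, 4, 5])
print(p, r)  # 0.75 0.6
```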


WHAT ARE THE APPLICATIONS?
1. In tourism – For example, when you walk around an unknown place and stumble across an interesting landmark, would it not be great if you could just take a picture with your mobile phone and send it to a service that finds a similar picture in a database and tells you more about the building?
2. In advertising – For example, customers take pictures of print-media adverts, send them in, and receive promotions, product information, vouchers and so on.
3. In medical image databases – For example, when a doctor has a patient suffering from shortness of breath, x-rays can be taken and matched against images in the database, which could return data on similar diagnosed cases.
4. In finance and marketing – For example, in stock prices and sales data.
5. In scientific databases – For example, sensor data in weather, geological and environmental studies.
6. In crime investigation – For example, matching fingerprints against those of suspects.

CONTENT BASED INFORMATION RETRIEVAL

CONTENT BASED IMAGE RETRIEVAL
One of the most well-known CBIR systems is query by image content (QBIC), which was developed by IBM. It uses several different features, including color, sketches, texture, shape, and example images to retrieve images from image and video databases. Since its launch in 1995, the QBIC model has been employed for quite a few digital libraries or collections. 



e.g.: One recent adopter is the State Hermitage Museum in Russia (www.hermitage.ru), which uses QBIC for its Web-based digital image collection. Users can find artwork images by selecting colors from a palette or by sketching shapes on a canvas. The user can also refine existing search results by requesting all artwork images with similar visual attributes.
In figure 2.1, the user chooses a color from the palette and composes the color scheme of the artwork he or she is looking for. Figure 2.2 shows the artwork images that match this query.
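
QBIC's actual matching algorithms are more elaborate than this, but as a rough sketch of the "pick a color from the palette" idea, one could rank images by the fraction of their pixels that lie close to the chosen color. The image names, random pixel data and tolerance value below are invented for illustration.

```python
import numpy as np

def color_coverage(image, palette_color, tolerance=60):
    """Fraction of pixels within `tolerance` (Euclidean RGB distance) of the chosen color.
    `image` is an (H, W, 3) uint8 array; `palette_color` is an (R, G, B) tuple."""
    distances = np.linalg.norm(image.astype(float) - np.array(palette_color, dtype=float), axis=-1)
    return float((distances < tolerance).mean())

# Rank a toy collection by how much of each image is (roughly) the deep blue the user picked.
rng = np.random.default_rng(1)
collection = {name: rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
              for name in ("icon.png", "landscape.png")}
chosen = (20, 30, 160)  # the color selected from the palette
ranking = sorted(collection, key=lambda name: color_coverage(collection[name], chosen), reverse=True)
print(ranking)
```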


e.g.: Another example of digital libraries or collections that have incorporated CBIR technology is the National Science Foundation’s International Digital Library Project (www.memorynet.org), a project that is composed of several image collections. The information retrieval system for these collections includes both a traditional text-based search engine and a CBIR system called SIMPLIcity (Semantics-sensitive Integrated Matching for Picture Libraries) developed by Wang et al. (2001) of Pennsylvania State University. 

[Figure 2.1: choosing a color from the palette; Figure 2.2: the artwork images matching the query]


From the front page of these image collections, a user can choose to display a random group of images.
Below each image is a "similar" button; clicking this allows the user to view a group of images that contain objects similar to the previously selected one. By providing feedback to the search engine this way, the user can find images of desired objects without knowing their names or descriptions.
Simply put, SIMPLIcity segments each image into small regions, extracts several features (such as color, location, and shape) from these regions, and classifies them into semantic categories (such as textured/non-textured and graph/photograph). When computing the similarity between the query image and the images in the database, all these features are considered and integrated, and the best matching results are retrieved.

e.g : Similar applications of CBIR technology in digital libraries include the University of California–Berkeley’s Digital Library Project (http://bnhm.berkeley.edu), the National STEM Digital Library (ongoing), and Virginia Tech’s anthropology digital library, ETANA (ongoing).

Since CBIR is the major research area and has the longest history in content-based information retrieval, there are many models, products, and ongoing projects in addition to the above examples. As image collections become a significant part of digital libraries, more attention has been paid to the possibility of providing content-based image search as a complement to existing metadata search.


Videos on how content-based image retrieval works:
[embedded video: "content-based image retrieval 1"]
[embedded video: "Content based Image retrieval"]


Content-Based Audio Retrieval

Compared with CBIR, content-based audio retrieval (CBAR) is relatively new, and fewer research projects on it can be found (content-based video retrieval appeared even later, partly because the digitization technology for video appeared later than that for image and audio). In general, existing CBAR approaches start from the content analysis of audio clips. An example of this content analysis is extracting basic audio elements such as duration, pitch, amplitude, brightness, and bandwidth (Wold et al. 1996).
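
As a hedged sketch of what extracting a few of those basic elements might look like for a short mono clip, the snippet below uses the common spectral-centroid and spectral-spread formulas for brightness and bandwidth. This is an illustration only, not Wold et al.'s exact method, and pitch estimation is omitted.

```python
import numpy as np

def basic_audio_features(samples, sample_rate):
    """A few simple content features of a mono clip (float samples, roughly in [-1, 1])."""
    duration = len(samples) / sample_rate                      # length in seconds
    amplitude = float(np.sqrt(np.mean(samples ** 2)))          # RMS loudness
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    weights = spectrum / (spectrum.sum() + 1e-12)
    brightness = float((freqs * weights).sum())                # spectral centroid, in Hz
    bandwidth = float(np.sqrt((((freqs - brightness) ** 2) * weights).sum()))  # spectral spread
    return {"duration": duration, "amplitude": amplitude,
            "brightness": brightness, "bandwidth": bandwidth}

# Example: a one-second 440 Hz sine tone.
sr = 8000
t = np.arange(sr) / sr
print(basic_audio_features(np.sin(2 * np.pi * 440 * t), sr))
```
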
As digital video Websites such as YouTube and Google Video become more popular, how to retrieve desired video clips effectively is a great concern. Searching by some features of video, such as motion and texture, can be a good complement to the traditional text-based search method. 


e.g.: One of the earliest examples is the VideoQ system developed by Chang et al. (1997) of Columbia University (www.ctr.columbia.edu/VideoQ), which allows a user to search video based on a rich set of visual features and spatio-temporal relationships. The video clips in the database are stored as MPEG files. Through a Web interface, the user can formulate a query scene as a collection of objects with different attributes, including motion, shape, color, and texture. Once the user has formulated the query, it is sent to a query server, which contains several databases for different content features. On the query server, the similarities between the features of each object specified in the query and those of the objects in the database are computed; a list of video clips is then retrieved based on their similarity values. For each of these video clips, key-frames are dynamically extracted from the video database and returned to the browser. The matched objects are highlighted in the returned key-frames. The user can interactively view these matched video clips by simply clicking on a key-frame; the video clip corresponding to that key-frame is then extracted from the video database (Chang et al. 1997). Figures 5.1 and 5.2 show an example of a visual search through the VideoQ system.
[Figures 5.1–5.2: an example of a visual search through the VideoQ system]



Content-Based Motion Retrieval / Content-Based Video Retrieval
In the area of video retrieval, the main focus in the mid-1990s was robust shot boundary detection, where the most common approaches involved thresholding the distance between the color histograms of two consecutive frames in a video [Flickner et al. 1995]. Hanjalic et al. [1997] proposed a method which overcame the problem of subjective user thresholds: their approach did not depend on any manual parameters and gave a set of key-frames based on an objective model of the video information flow. Haas et al. [1997] described a method that uses the motion within the video to determine the shot boundary locations. A minimal sketch of the basic histogram-thresholding approach is given below.
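
The sketch assumes the frames are already available as RGB arrays; the threshold value is an arbitrary illustrative choice, which is exactly the kind of subjective parameter Hanjalic et al. set out to remove.

```python
import numpy as np

def frame_histogram(frame, bins=8):
    """Normalised per-channel color histogram of one (H, W, 3) uint8 frame."""
    hist = np.concatenate([np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
                           for c in range(3)])
    return hist.astype(float) / hist.sum()

def detect_shot_boundaries(frames, threshold=0.4):
    """Flag a cut between frame i-1 and frame i when the histogram distance exceeds the threshold."""
    hists = [frame_histogram(f) for f in frames]
    boundaries = []
    for i in range(1, len(hists)):
        distance = 0.5 * np.abs(hists[i] - hists[i - 1]).sum()  # L1 distance, lies in [0, 1]
        if distance > threshold:
            boundaries.append(i)
    return boundaries

# Toy "video": five dark frames followed by five bright frames -> one cut at index 5.
dark = np.full((48, 64, 3), 20, dtype=np.uint8)
bright = np.full((48, 64, 3), 230, dtype=np.uint8)
print(detect_shot_boundaries([dark] * 5 + [bright] * 5))  # [5]
```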



' Yosi Mass - Search in audio-visual content using P2P information retrieval
As the coordinator of the SAPIR European project, Yosi Mass discussed how, today, Web searches are dominated by search giants such as Google, Yahoo, or MSN, which deploy a centralized approach to indexing and utilize text-only indexes enriched by page-rank algorithms. Consequently, while it is possible to search for audio-visual content, the search is limited to associated text and metadata annotations. Supporting real content-based, audio-visual search requires media-specific understanding and extremely high CPU utilization, which would not scale in today's centralized solutions. He claimed that large-scale, distributed P2P architectures will make it possible to search audio-visual content using the query-by-example paradigm. '

Video on Scalable Video Motion Retrieval
http://www.youtube.com/watch?v=1p4eAK39OU8


Conclusion for CBIR
There is no doubt that content-based information retrieval technology is an emerging trend for digital library development and will be an important complement to traditional text-based retrieval technology. The ideal CBIR system can semantically understand the information in a digital library and deliver to users the most desirable data. However, machine understanding of semantic information remains a great difficulty. Therefore, most current research projects, including those discussed above, deal with the understanding and retrieval of lower-level or physical features of multimedia content. Certainly, as related disciplines such as computer vision and artificial intelligence keep developing, more research will be done on higher-level, feature-based retrieval.
In addition, the growing variety of multimedia content in digital libraries has also brought many new challenges. For instance, 3-D models are now important components of many digital libraries and museums. Content-based retrieval technology can be a good direction for this type of content, since the shapes of these 3-D objects are often found more effectively if the user can compose the query visually. New CBIR approaches need to be developed for these novel formats.
Furthermore, most CBIR projects today tend to be Web-based, whereas many projects in the 1990s were based on client applications. These Web-based CBIR tools will have a significant influence on digital libraries and repositories, as most of them are also Web-based. Particularly in the age of Web 2.0, some large digital repositories—such as Flickr for images and YouTube and Google Video for video—are changing people's daily lives. The implementation of CBIR will be a great benefit to millions of users.
Since the nature of CBIR is to provide better search aids to end users, it is extremely important to focus on actual users' needs and how well users can use these new search tools. It is surprising to find that little usability testing has been done for most CBIR projects. Such testing should be incorporated into future CBIR research before the technology is widely adopted.

THIS IS AN OVERVIEW OF CBIR but it's on a mobile phone. Check this link out because it's very useful. :D

Online Content-Based Information Retrieval (CBIR) Systems for Mobile Devices


REFERENCES

1. http://www.morganclaypool.com/doi/abs/10.2200/S00244ED1V01Y200912ICR010?journalCode=icr
2. http://books.google.com.my/books?hl=en&lr&id=OHjNV4i3jdEC&oi=fnd&pg=PR13&dq=multimedia+information+retrieval&ots=ChqL35YKfQ&sig=jjHlw27odSDlj-CIqH42eTAGgOc#v=onepage&q&f=false
3. http://www.liacs.nl/~mlew/mir.survey16b.pdf
4. http://www.ala.org/ala/mgrps/divs/lita/publications/ital/27/1/wan.pdf
5. http://www.intelligence.tuc.gr/~petrakis/courses/multimedia/retrieval.pd