Handbook of Multimedia for Digital Entertainment and Arts (Part 15)


As described above, the frames of soccer videos were categorized into four view types, i.e., V = {C, M, G, G_p}. The processing time for the view decision in soccer videos was measured. Table 1 shows the time measured for the view type decision for different terminal computing powers. As shown, the longest time is taken to detect the global view with goal post (G_p). From the experimental results, the first condition for the soccer video, meaning the stability of the proposed filtering system in real time, can be found by substituting these results into Eq. (3). For the soccer video, the first condition can be described as

ρ = N_f · PT(G_p) ≤ 1; therefore, N_f ≤ 1 / PT(G_p).   (6)

Table 1 Processing time for the view type decision

View          Terminal T1    Terminal T2    Terminal T3
E[PT(C)]      0.170 sec.     0.037 sec.     0.025 sec.
E[PT(M)]      0.270 sec.     0.075 sec.     0.045 sec.
E[PT(G)]      0.281 sec.     0.174 sec.     0.088 sec.
E[PT(G_p)]    0.314 sec.     0.206 sec.     0.110 sec.

The second condition is verified by evaluating the filtering performance of the proposed filtering algorithm. Figure 9 shows the variation of the filtering performance with respect to sampling rate: the performance (recall rate in the figure) decreases as the sampling rate decreases. From Fig. 9, it can be seen that the maximum permissible limit of the sampling rate is determined by the tolerance (T_fp) of filtering performance. When the system permits a T_fp of about 80% filtering performance, the experimental results show that the sampling rate f_s becomes 2.5 frames per second.

Fig. 9 Variation of filtering performance according to sampling rate

As a result of the experiments, we obtain the system requirements for real-time filtering of soccer videos as shown in Fig. 10. Substituting the PT(G_p) values of Table 1 into Eq. (6), we acquire the number of input channels and the frame sampling rates available in the filtering system. As shown, the number of input channels depends on both the sampling rate and the terminal capability. By assuming the confidence limit of the filtering performance, T_fp, we also obtain the minimum sampling rate from Fig. 10.

Fig. 10 The number of input channels that enables the real-time filtering system to satisfy the filtering requirements in (a) Terminal 1, (b) Terminal 2, and (c) Terminal 3. Lines ① and ② indicate the conditions of Eq. (6) and Fig. 9, respectively. Line ① shows that the number of input channels is inversely proportional to the sampling rate, scaled by the processing time of G_p. Line ② marks the range of sampling rates required to maintain over 80% filtering performance. Line ③ (the dotted horizontal line) represents the minimum number of channels, i.e., one channel.

To maintain stability in the filtering system, the number of input channels and the sampling rate should be selected in the range where the three conditions given by lines ①, ②, and ③ meet. Supposing that the confidence limit of the filtering performance is 80%, Fig. 10 illustrates the following results: one input channel is allowable for real-time filtering in Terminal 1 at sampling rates between 2.5 and 3 frames per second. In Terminal 2, one or two channels are allowable at sampling rates between 2.5 and 4.5 frames per second. Terminal 3 can have fewer than four channels at sampling rates between 2.5 and 9 frames per second. The results show that Terminal 3, which has the highest capability, supports more input channels for real-time filtering than the others.
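To make these figures concrete, the following minimal sketch recomputes the feasible operating region of Fig. 10 from Table 1 and Eq. (6). It is our own illustration, not the authors' code: it assumes N_f in Eq. (6) denotes the aggregate frame arrival rate (number of channels times per-channel sampling rate) and that the 80% tolerance of Fig. 9 translates into a minimum sampling rate of 2.5 fps.

```python
# Hedged sketch: feasible (channels, sampling rate) pairs per terminal.
# Assumes the stability condition of Eq. (6): channels * f_s * PT(G_p) <= 1,
# and a minimum rate of 2.5 fps for >= 80% filtering performance (Fig. 9).

PT_GP = {"Terminal 1": 0.314, "Terminal 2": 0.206, "Terminal 3": 0.110}  # sec, Table 1
F_S_MIN = 2.5  # fps, minimum rate that keeps recall above the tolerance T_fp

for terminal, pt in PT_GP.items():
    budget = 1.0 / pt  # maximum aggregate frame rate the terminal can process (fps)
    print(f"{terminal}: aggregate processing budget {budget:.2f} fps")
    for channels in range(1, 5):
        f_s_max = budget / channels  # highest per-channel sampling rate still stable
        if f_s_max >= F_S_MIN:
            print(f"  {channels} channel(s): f_s in [{F_S_MIN}, {f_s_max:.1f}] fps")
```

Run on the Table 1 values, this reproduces the trend reported for Fig. 10 (one channel on Terminal 1, up to three on Terminal 3); borderline cases such as the second channel on Terminal 2 sit at the edge of this bound and depend on the exact curves measured in the chapter.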
We implemented the real-time filtering system on our test-bed [27], as shown in Fig. 11. The main screen shows a drama channel, assumed to be the favorite station of the TV viewer, and the screen box at the bottom right of the figure shows the filtered broadcast from the channel of interest. In this case, a soccer video is selected as the channel of interest, and "Shooting" and "Goal" scenes are considered the meaningful scenes.

To perform the filtering algorithm on the soccer video, the CPU usage and memory consumption of each terminal should remain stable. Each terminal shows a memory consumption of between 32 and 38 Mbytes, and average CPU usage of 85% (T1), 56% (T2), and 25% (T3), as measured with the Windows performance monitor.

Fig. 11 Screenshot of the real-time content filtering service running with a single channel of interest

Discussion

For practical purposes, we discuss the design, implementation, and integration of the proposed filtering system with a real set-top box. To realize the proposed system, the computing power to perform the filtering algorithm within the limited time is the most important element. We expect that TV terminals equipped with STBs and PVRs will evolve into multimedia centers in the home, with computing and home server connections [28, 29]. The terminal also requires a digital tuner enabling it to extract each broadcasting stream by time division, or multiple tuners for the filtering of multiple channels. Next, practical implementation should be based on conditions such as buffer size, the number of channels, filtering performance, and sampling rate, in order to stabilize filtering performance. Finally, the terminal should know the genre of the input broadcast video, because the applied filtering algorithm depends on the video genre. This could be resolved through the time schedule of an electronic program guide.

The proposed filtering system is not without its limitations. As shown in previous works [21–24], the filtering algorithm requires further enhanced filtering performance with real-time processing. As well, the algorithm should be extendable to other sports videos such as baseball, basketball, and golf; and, to approach a real environment, we need to focus on the evaluation of the corresponding system utilization, e.g., CPU usage and memory consumption, as shown in [13] and [30].

Conclusion

In this chapter, we introduced a real-time content filtering system for live broadcasts to provide personalized scenes, and analyzed its requirements in TV terminals equipped with set-top boxes and personal video recorders. As a result of experiments based on these requirements, the effectiveness of the proposed filtering system has been verified. By applying queueing theory and a fast filtering algorithm, it is shown that the proposed system model and filtering requirements are suitable for real-time content filtering with multiple channel inputs. Our experimental results revealed that even a low-performance terminal with a 650 MHz CPU can perform the filtering function in real time. Therefore, the proposed queueing system model and its requirements confirm that real-time filtering of live broadcasts is possible with currently available set-top boxes.

References

1. TVAF, "Phase 2 Benchmark Features," SP001v20, http://www.tv-anytime.org/, 2005, p. 9.
2. N. Dimitrova, H.-J. Zhang, B. Shahraray, I. Sezan, T. Huang, and A. Zakhor, "Applications of Video-Content Analysis and Retrieval," IEEE Multimedia, Vol. 9, No. 3, 2002, pp. 42–55.
3. S. Yang, S. Kim, and Y. M. Ro, "Semantic Home Photo Categorization," IEEE Trans. Circuits and Systems for Video Technology, Vol. 17, 2007, pp. 324–335.
4. C.-W. Ngo, Y.-F. Ma, and H.-J. Zhang, "Video Summarization and Scene Detection by Graph Modeling," IEEE Trans. Circuits and Systems for Video Technology, Vol. 15, No. 2, 2005, pp. 296–305.
5. H. Li, G. Liu, Z. Zhang, and Y. Li, "Adaptive Scene-Detection Algorithm for VBR Video Stream," IEEE Trans. Multimedia, Vol. 6, No. 4, 2004, pp. 624–633.
6. Y. Li, S. Narayanan, and C.-C. Jay Kuo, "Content-Based Movie Analysis and Indexing Based on Audio-Visual Cues," IEEE Trans. Circuits and Systems for Video Technology, Vol. 14, No. 8, 2004, pp. 1073–1085.
7. J. M. Gauch, S. Gauch, S. Bouix, and X. Zhu, "Real Time Video Scene Detection and Classification," Information Processing and Management, Vol. 35, 1999, pp. 381–400.
8. I. Otsuka, K. Nakane, A. Divakaran, K. Hatanaka, and M. Ogawa, "A Highlight Scene Detection and Video Summarization System using Audio Feature for a Personal Video Recorder," IEEE Trans. Consumer Electronics, Vol. 51, No. 1, 2005, pp. 112–116.
9. S. H. Jin, T. M. Bae, and Y. M. Ro, "Intelligent Broadcasting System and Services for Personalized Semantic Contents Consumption," Expert Systems with Applications, Vol. 31, 2006, pp. 164–173.
10. J. Kim, S. Suh, and S. Sull, "Fast Scene Change Detection for Personal Video Recorder," IEEE Trans. Consumer Electronics, Vol. 49, No. 3, 2003, pp. 683–688.
11. J. S. Choi, J. W. Kim, D. S. Han, J. Y. Nam, and Y. H. Ha, "Design and implementation of DVB-T receiver system for digital TV," IEEE Trans. Consumer Electronics, Vol. 50, No. 4, 2004, pp. 991–998.
12. M. Bais, J. Cosmas, C. Dosch, A. Engelsberg, A. Erk, P. S. Hansen, P. Healey, G. K. Klungsoeyr, R. Mies, J.-R. Ohm, Y. Paker, A. Pearmain, L. Pedersen, A. Sandvand, R. Schafer, P. Schoonjans, and P. Stammnitz, "Customized television: standards compliant advanced digital television," IEEE Trans. Broadcasting, Vol. 48, No. 2, 2002, pp. 151–158.
13. N. Dimitrova, T. McGee, H. Elenbaas, and J. Martino, "Video content management in consumer devices," IEEE Trans. Knowledge and Data Engineering, Vol. 10, Issue 6, 1998, pp. 988–995.
14. N. Dimitrova, H. Elenbass, T. McGee, and L. Agnihotri, "An architecture for video content filtering in consumer domain," in Proc. Int. Conf. on Information Technology: Coding and Computing 2000, 27–29 March 2000, pp. 214–221.
15. D. Gross and C. M. Harris, Fundamentals of Queueing Theory, John Wiley & Sons: New York, NY, 1998.
16. L. Kleinrock, Queueing Systems, Wiley: New York, NY, 1975.
17. K. Lee and H. S. Park, "Approximation of the Queue Length Distribution of General Queues," ETRI Journal, Vol. 15, No. 3, 1994, pp. 35–46.
18. A. Eckberg, Jr., "The Single Server Queue with Periodic Arrival Process and Deterministic Service Times," IEEE Trans. Communications, Vol. 27, No. 3, 1979, pp. 556–562.
19. Y. Fu, A. Ekin, A. M. Tekalp, and R. Mehrotra, "Temporal segmentation of video objects for hierarchical object-based motion description," IEEE Trans. Image Processing, Vol. 11, Feb. 2002, pp. 135–145.
20. D. Zhong and S. Chang, "Real-time view recognition and event detection for sports video," Journal of Visual Communication & Image Representation, Vol. 15, No. 3, 2004, pp. 330–347.
21. A. Ekin, A. M. Tekalp, and R. Mehrotra, "Automatic Soccer Video Analysis and Summarization," IEEE Trans. Image Processing, Vol. 12, No. 7, 2003, pp. 796–807.
22. A. Ekin and A. M. Tekalp, "Generic Play-break Event Detection for Summarization and Hierarchical Sports Video Analysis," in Proc. IEEE Int. Conf. Multimedia & Expo 2003, 2003, pp. 27–29.
23. M. Kumano, Y. Ariki, K. Tsukada, S. Hamaguchi, and H. Kiyose, "Automatic Extraction of PC Scenes Based on Feature Mining for a Real Time Delivery System of Baseball Highlight Scenes," in Proc. IEEE Int. Conf. Multimedia and Expo 2004, 2004, pp. 277–280.
24. R. Leonardi, P. Migliorati, and M. Prandini, "Semantic indexing of soccer audio-visual sequences: a multimodal approach based on controlled Markov chains," IEEE Trans. Circuits and Systems for Video Technology, Vol. 14, No. 5, 2004, pp. 634–643.
25. P. Meer and B. Georgescu, "Edge Detection with Embedded Confidence," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 23, No. 12, 2001, pp. 1351–1365.
26. C. Wolf, J.-M. Jolion, and F. Chassaing, "Text localization, enhancement and binarization in multimedia documents," in Proc. 16th Int. Conf. Pattern Recognition, Vol. 2, 2002, pp. 1037–1040.
27. S. H. Jin, T. M. Bae, Y. M. Ro, and K. Kang, "Intelligent Agent-based System for Personalized Broadcasting Services," in Proc. Int. Conf. Image Science, Systems and Technology '04, 2004, pp. 613–619.
28. S. Pekowsky and R. Jaeger, "The set-top box as multi-media terminal," IEEE Trans. Consumer Electronics, Vol. 44, Issue 3, 1998, pp. 833–840.
29. J.-C. Moon, H.-S. Lim, and S.-J. Kang, "Real-time event kernel architecture for home-network gateway set-top-box (HNGS)," IEEE Trans. Consumer Electronics, Vol. 45, Issue 3, 1999, pp. 488–495.
30. B. Shahrary, "Scene change detection and content-based sampling of video sequences," in Proc. SPIE, Vol. 2419, 1995, pp. 2–13.

Chapter 19 Digital Theater: Dynamic Theatre Spaces

Sara Owsley Sood (Department of Computer Science, Pomona College, 185 East Sixth Street, Claremont, CA 91711; e-mail: sara@cs.pomona.edu) and Athanasios V. Vasilakos (Department of Theatre Studies, University of Peloponnese, 21100 Nafplio, Greece; e-mail: vasilako@ath.forthnet.gr)

Introduction

Digital technology has given rise to new media forms. Interactive theatre is one such new type of media, introducing new digital interaction methods into theatres. In a typical interactive theatre experience, people enter cyberspace and enjoy the development of a story in a non-linear manner by interacting with the characters in the story. Therefore, in contrast to conventional theatre, which presents predetermined scenes and story settings unilaterally, interactive theatre makes it possible for the viewer to actually take part in the play and enjoy a first-person experience.

In the "Interactive Theater" section, we are concerned with embodied mixed reality techniques using video see-through HMDs (head-mounted displays). Our research goal is to explore the potential of embodied mixed reality space as an interactive theatre experience medium. What makes our system advantageous is that we, for the first time, combine embodied mixed reality, live 3D human actor capture, and Ambient Intelligence for an increased sense of presence and interaction.

We present an Interactive Theatre system using Mixed Reality, 3D Live, 3D sound, and Ambient Intelligence. In this system, thanks to embodied Mixed Reality and Ambient Intelligence, audiences are fully immersed in an imaginative virtual world of the play in 3D form. They can walk around to view the show from any viewpoint, see different parts and locations of the story scene, and follow the story according to their own interests.
Moreover, with 3D Live technology, which allows live 3D human capture, our Interactive Theatre system enables actors at different places around the world to perform together in the same place in real time. Audiences can see the performance of these actors/actresses as if they were really in front of them. Furthermore, using Mixed Reality technologies, audiences can see both virtual objects and the real world at the same time. Thus, they can see not only the actors/actresses of the play but the other audience members as well. All of them can also interact and participate in the play, which creates a unique experience.

Our system of Mixed Reality and 3D Live with Ambient Intelligence is intended to bring performance art to the people, while offering performance artists a creative tool to extend the grammar of traditional theatre. This Interactive Theatre also enables social networking and relations, which is the essence of theatre, by supporting simultaneous participants in a human-to-human social manner.

While Interactive Theater engages patrons in an experience in which they drive the performance, a substantial number of systems have been built in which the performance is driven by a set of digital actors. That is, a team of digital actors autonomously generates a performance, perhaps with some input from the audience or from other human actors.

The challenge of generating novel and interesting performance content for digital actors differs greatly by the type of performance or interaction at hand. In cases where the digital actor is interacting with human actors, the digital actor must understand the context of the performance and respond with appropriate and original content in a time frame that keeps the tempo or beat of the performance intact. When performances are completely machine driven, the task is more like creating or generating a compelling story, a variant on a classic set of problems in the field of Artificial Intelligence. In the section "Automated Performance by Digital Actors" of this article, we survey various systems that automatically generate performance content for digital actors, both in human/machine hybrid performances and in completely automated performances.

Interactive Theater

The systematic study of the expressive resources of the body started in France with François Delsarte at the end of the 1800s [4, 5]. Delsarte studied how people gestured in real life and elaborated a lexicon of gestures, each of which was held to have a direct correlation with the psychological state of man. Delsarte claimed that for every emotion, of whatever kind, there is a corresponding body movement. He also believed that a perfect reproduction of the outer manifestation of some passion will induce, by reflex, that same passion. Delsarte inspired us to use a lexicon of gestures as working material to start from. By providing automatic and unencumbering gesture recognition, technology offers a tool to study and rehearse theatre.
It also provides us with tools that augment the actor's action with synchronized digital multimedia presentations.

Delsarte's "laws of expression" spread widely in Europe, Russia, and the United States. At the beginning of the twentieth century, Vsevolod Meyerhold at the Moscow Art Theatre developed a theatrical approach that moved away from the naturalism of Stanislavski. Meyerhold looked to the techniques of the Commedia dell'Arte, pantomime, the circus, and the Kabuki and Noh theatres of Japan for inspiration, and created a technique of the actor which he called "Biomechanics." Meyerhold was fascinated by movement, and trained actors to be acrobats, clowns, dancers, singers, and jugglers, capable of rapid transitions from one role to another. He banished virtuosity in scene and costume decoration and focused on the actor's body and gestural skills to convey the emotions of the moment. By presenting to the public properly executed physical actions and by drawing upon their complicity of imagination, Meyerhold aimed at a theatre in which spectators would be invited to social and political insights by the strength of the emotional communication of gesture. Meyerhold's work stimulated us to investigate the relationship between motion and emotion.

Later in the century, Bertolt Brecht elaborated a theory of acting and staging aimed at jolting the audience out of its uncritical stupor. Performers of his plays used physical gestures to illuminate the characters they played, and maintained a distance between the part and themselves. The search for an ideal gesture that distills the essence of a moment (Gestus) is an essential part of his technique. Brecht wanted actors to explore and heighten the contradictions in a character's behavior. He would invite actors to stop at crucial points in the performance and have them explain to the audience the implications of a character's choice. By doing so he wanted the public to become aware of the social implications of everyone's life choices. Like Brecht, we are interested in performances that produce awakening and reflection in the public rather than uncritical immersion. We have therefore organized our technology to augment the stage in a way similar to how "Mixed Reality" enhances or completes our view of the real world. This contrasts with work on Virtual Reality, Virtual Theatre, or Virtual Actors, which aims at replacing the stage and actors with virtual ones, and at involving the public in an immersive narration similar to an open-eyes dream.

The English director Peter Brook, a remarkable contemporary, has accomplished a creative synthesis of the century's quest for a novel theory and practice of acting. Brook started his career directing "traditional" Shakespearean plays and later moved his stage and theatrical experimentation to hospitals, churches, and African tribes. He has explored audience involvement and influence on the play, preparation vs. spontaneity of acting, the relationship between physical and emotional energy, and the usage of space as a tool for communication. His work, centered on sound, voice, gestures, and movement, has been a constant source of inspiration to many contemporaries, together with his thought-provoking theories on theatrical research and discovery. We admire Brook's search for meaning and its representation in theatre.
In particular, we would like to follow his path in bringing theatre out of the traditional stage and performing closer to people, in a variety of public and cultural settings. Our virtual theatre enables social networking by supporting simultaneous participants in a human-to-human social manner.

Flavia Sparacino at the MIT Media Lab created the Improvisational TheatreSpace [1, 2], which embodied human actors and Media Actors to generate an emergent story through interaction among themselves and the public. An emergent story is one that is not strictly tied to a script; it is the analog of a "jam session" in music. Like musicians who play together, each with their unique musical personality, competency, and experience, to create a musical experience for which there is no score, a group of Media Actors and human actors perform a dynamically evolving story. Media Actors are autonomous agent-based text, images, movie clips, and audio. These are used to augment the play by expressing the actor's inner thoughts, memory, or personal imagery, or by playing other segments of the script. Human actors use full-body gestures, tone of voice, and simple phrases to interact with Media Actors. An experimental performance was presented in 1997 on the occasion of the Sixth Biennial Symposium on Arts and Technology [3].

Interactive Theater Architecture

In this section, we introduce the design of our Interactive Theatre Architecture. The diagram in Fig. 3 shows the whole system architecture.

Embodied mixed reality space and Live 3D actors

To stage an electronic theatre entertainment in a physical space, the actors and props are represented by digital objects, which must seamlessly appear in the physical world. This can be achieved using the full mixed reality spectrum of physical reality, augmented reality, and virtual reality. Furthermore, to implement human-to-human social interaction and physical interaction as essential features of the interactive theatre, the theory of embodied computing is applied in the system. As mentioned above, this research aims to maintain human-to-human interaction such as gestures, body language, and movement between users. Thus, we have developed a live 3D interaction system for viewers to view live human actors in the mixed reality environment.

In fact, science fiction has presaged such interaction in computing and communication. In 2001: A Space Odyssey, Dr. Floyd calls home using a videophone, an early on-screen appearance of 2D video-conferencing; this technology is now commonplace. More recently, the Star Wars films depicted 3D holographic communication. Following a similar philosophy, we apply computer graphics to create real-time 3D human actors for mixed reality environments. One goal of this work is to enhance the interactive theatre by developing a 3D human actor capture mixed reality system. The enabling technology is an algorithm for generating arbitrary novel views of a collaborator at video frame rate speeds (30 frames per second). We also apply these methods to communication in virtual spaces. We render the image of the collaborator from the viewpoint of the user, permitting very natural interaction.

Hardware setup

Figure 1 shows the overall structure of the 3D capture system. Eight Dragonfly FireWire cameras, operating at 30 fps and 640 by 480 resolution, are equally spaced around the subject, and one camera views him/her from above. Three Sync Units [...]
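The chapter's actual novel-view algorithm is not reproduced in this excerpt (the hardware description breaks off above), but the following minimal sketch illustrates the general idea of viewpoint-dependent rendering from such a camera ring, under our own assumptions: the viewer's azimuth selects the two nearest ring cameras, whose images are cross-blended. The function names, camera layout, and blending rule are illustrative, not the authors'.

```python
import math
import numpy as np

NUM_RING_CAMERAS = 8  # cameras equally spaced around the subject, per the rig above

def nearest_cameras(viewer_angle):
    """Return the indices and blend weights of the two ring cameras
    bracketing the viewer's azimuth angle (radians)."""
    step = 2.0 * math.pi / NUM_RING_CAMERAS
    a = viewer_angle % (2.0 * math.pi)
    left = int(a // step) % NUM_RING_CAMERAS
    right = (left + 1) % NUM_RING_CAMERAS
    t = (a - left * step) / step  # 0.0 at the left camera, 1.0 at the right
    return (left, 1.0 - t), (right, t)

def render_novel_view(viewer_angle, frames):
    """Naive cross-blend of the two nearest camera images. A real 3D Live
    system would warp each image by the recovered geometry before blending."""
    (i, wi), (j, wj) = nearest_cameras(viewer_angle)
    return wi * frames[i].astype(np.float32) + wj * frames[j].astype(np.float32)

# Example: 8 synthetic 640x480 RGB frames, viewer at 30 degrees azimuth.
frames = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
          for _ in range(NUM_RING_CAMERAS)]
view = render_novel_view(math.radians(30.0), frames)
```

The per-frame work here is a weighted sum over two images, which is what makes the 30 fps target of the described system plausible; the expensive part in practice is the geometry recovery that this sketch deliberately omits.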
[...] story outline and scene by specifying the locations and interactions of all 3D Live and virtual characters. In order to enable interaction between the audiences and the actors at different places, several cameras and microphones are placed inside the Interactive Theatre Space to capture the images and voice of the audiences. The images and voice captured by the camera and microphone near the place of a 3D Live [...]
