Advances in Integrations of Intelligent Methods, 1st ed., Ioannis Hatzilygeroudis, Isidoros Perikos, Foteini Grivokostopoulou (Eds.), 2020



Smart Innovation, Systems and Technologies 170

Ioannis Hatzilygeroudis, Isidoros Perikos, Foteini Grivokostopoulou (Editors)

Advances in Integrations of Intelligent Methods. Post-workshop volume of the 8th International Workshop CIMA 2018, Volos, Greece, November 2018 (in conjunction with IEEE ICTAI 2018)

Smart Innovation, Systems and Technologies, Volume 170

Series Editors: Robert J. Howlett, Bournemouth University and KES International, Shoreham-by-Sea, UK; Lakhmi C. Jain, Faculty of Engineering and Information Technology, Centre for Artificial Intelligence, University of Technology Sydney, Sydney, NSW, Australia

The Smart Innovation, Systems and Technologies book series encompasses the topics of knowledge, intelligence, innovation and sustainability. The aim of the series is to make available a platform for the publication of books on all aspects of single and multi-disciplinary research on these themes, in order to make the latest results available in a readily accessible form. Volumes on interdisciplinary research combining two or more of these areas are particularly sought.

The series covers systems and paradigms that employ knowledge and intelligence in a broad sense. Its scope is systems having embedded knowledge and intelligence, which may be applied to the solution of world problems in industry, the environment and the community. It also focuses on the knowledge-transfer methodologies and innovation strategies employed to make this happen effectively. The combination of intelligent systems tools and a broad range of applications introduces a need for a synergy of disciplines from science, technology, business and the humanities. The series will include conference proceedings, edited collections, monographs, handbooks, reference books, and other relevant types of book in areas of science and technology where smart systems and technologies can offer innovative solutions.

High quality content is an essential feature for all book proposals accepted for the series. It is
expected that editors of all accepted volumes will ensure that contributions are subjected to an appropriate level of reviewing and adhere to KES quality principles.

** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, SCOPUS, Google Scholar and SpringerLink **

More information about this series at http://www.springer.com/series/8767

Editors: Ioannis Hatzilygeroudis, Department of Computer Engineering and Informatics, School of Engineering, University of Patras, Patras, Greece; Isidoros Perikos, Department of Computer Engineering and Informatics, University of Patras, Patras, Greece; Foteini Grivokostopoulou, Department of Computer Engineering and Informatics, University of Patras, Patras, Greece

ISSN 2190-3018; ISSN 2190-3026 (electronic). Smart Innovation, Systems and Technologies. ISBN 978-981-15-1917-8; ISBN 978-981-15-1918-5 (eBook). https://doi.org/10.1007/978-981-15-1918-5

© Springer Nature Singapore Pte Ltd 2020. This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the
editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore.

Preface

The combination of different intelligent methods is a very active research area in artificial intelligence (AI). The aim is to create integrated or hybrid methods that benefit from each of their components. It is generally believed that complex problems can be more easily solved with such integrated or hybrid methods.

Some of the existing efforts combine what are called soft computing methods (fuzzy logic, neural networks, and evolutionary algorithms), either among themselves or with more traditional AI technologies such as logic and rules. Another stream of efforts integrates case-based reasoning and machine learning with soft computing and traditional AI methods. Yet another integrates agent-based approaches with logic and non-symbolic approaches. Some of the combinations have become quite important and widely used, such as neuro-symbolic methods, neuro-fuzzy methods, and methods combining rule-based and case-based reasoning. However, other combinations are still under investigation, such as those related to the semantic Web and big data areas, as well as deep learning and swarm intelligence methods. For example, the recently emerged deep learning architectures are mostly hybrid by nature. In some cases, integrations are based on first principles, creating hybrid models, whereas in other cases they
are created in the context of solving problems, leading to systems or applications. Important topics of the above area include (but are not limited to) the following:

• Bayesian networks
• Case-based reasoning
• Deep learning
• Ensemble learning and ensemble methods
• Evolutionary algorithms
• Evolutionary neural systems
• Expert systems and knowledge-based systems
• Fuzzy evolutionary systems
• Hybrid approaches for the Web
• Hybrid knowledge representation methods and approaches
• Hybrid and distributed ontologies
• Information fusion techniques
• Integrations of neural networks
• Intelligent agent integrations
• Integrations of statistical and symbolic AI approaches
• Machine learning combinations
• Neuro-fuzzy approaches/systems
• Reinforcement learning
• Semantic Web technologies integrations
• Swarm intelligence methods and integrations
• Applications:
  – Agents and multi-agent systems
  – Big data
  – Biology, computational biology, and bioinformatics
  – Decision support and recommender systems
  – Economics, business, and forecasting applications
  – Education and distance learning
  – Industrial and engineering applications
  – Medicine and health care
  – Multimodal human–computer interaction
  – Natural language processing and understanding
  – Planning, scheduling, search, and optimization
  – Robotics
  – Social networks

This volume includes extended and revised versions of some of the papers presented at the 8th International Workshop on Combinations of Intelligent Methods and Applications (CIMA 2018), as well as papers submitted especially for this volume after a call for papers. CIMA 2018 was held in conjunction with the 30th IEEE International Conference on Tools with Artificial Intelligence (IEEE ICTAI 2018), November 5–7, 2018, Volos, Greece. Papers went through a peer review process by the CIMA 2018 program committee members.

Chapter 1, by Ahmed Ewais, Mohammed Awad, and Khetam Hadia, presents a work on the use of hybrid systems for providing
adaptation of learning materials for assessments. It presents a concrete framework for delivering learning materials and generating assessments based on intended learning outcomes (ILOs), using both support vector machine (SVM) and fuzzy logic algorithms, with quite interesting results.

Chapter 2, by Kostas Kolomvatsos and Christos Anagnostopoulos, addresses the problem of query allocation in cloud computing. The chapter discusses the use of an ensemble similarity scheme responsible for delivering the complexity class of each query, to help in deciding allocation to an edge node. The large number of simulations conducted shows quite interesting results.

Chapter 3, by Pantelis Linardatos and Sotiris Kotsiantis, presents a work on the prediction of bitcoin prices. The chapter introduces a machine learning model which utilizes past prices of bitcoin, Google Trends data, and custom features, and, based on deep learning methods, predicts future bitcoin prices with quite high performance.

Chapter 4, by Mouna Ben Ishak, addresses the topic of statistical relational learning and examines metrics to evaluate a probabilistic relational model structure learning algorithm.

Chapter 5, by Iosif Mporas, Isidoros Perikos, and Michael Paraskevas, presents an architecture for the classification of pigmented skin lesions from dermatoscopic images. The architecture utilizes AdaBoost with random forest classifiers and reports quite interesting performance.

Chapter 6, by Lorenzo Servadei and Jing Yang, describes how statistical analysis and machine learning support the process of automated data generation in hardware and firmware design configuration. The authors show how statistical analysis and machine learning can help in correctly learning a mapping function to the register interface area within a certain constraint boundary, and they present useful metrics for assessing the validity and quality of the design settings.

Chapter 7, by Erich C. Teppan and Giacomo Da Col, introduces a novel
approach for automatically creating composite dispatching rules, i.e., heuristics for job sequencing, for makespan optimization in large-scale job shops. The approach builds on the combination of event-based simulation and genetic algorithms and achieves quite interesting performance.

Chapter 8, by Bin Wang, Shutao Zhang, Hongxiang Xu, Zhizheng Zhang, Wei Wu, Chenglong He, and Shiqiang Zong, presents a logic formalism, o-LPMLN, that is created by introducing ordered disjunctions into LPMLN. LPMLN provides a powerful framework to handle uncertainty and inconsistencies by combining the ideas of answer set programming and Markov logic networks (MLNs); logic programming with ordered disjunctions is a simple yet effective way to handle preferences in answer set programming.

We would like to express our appreciation to all the authors of submitted papers as well as to the members of the CIMA 2018 program committee for their valuable review contributions. We hope that this kind of post-proceedings will be useful to both researchers and developers.

Patras, Greece
Ioannis Hatzilygeroudis
Isidoros Perikos
Foteini Grivokostopoulou

Reviewers

• Ajith Abraham, Machine Intelligence Research Labs (MIR Labs)
• Plamen Angelov, Lancaster University, UK
• Abdulrahman Atahhan, Coventry University, UK
• Nick Bassiliades, Department of Informatics, Aristotle University of Thessaloniki, Greece
• Maumita Bhattacharya, Charles Sturt University, Australia
• Nikolaos Bourbakis, Wright State University, USA
• Kit-Yan Chan, Curtin University, Australia
• Gloria Cerasela Crisan, University of Bacau, Romania
• Artur d'Avila Garcez, City University London, UK
• Georgios Dounias, University of the Aegean, Greece
• Wei Fang, Jiangnan University, China
• Foteini Grivokostopoulou, University of Patras, Greece
• Ioannis Hatzilygeroudis, University of Patras, Greece
• Andreas Holzinger, Graz University of Technology, Austria
• Constantinos Koutsojannis, University of Patras, Greece
• Rudolf Kruse, University of Magdeburg
• George Magoulas,
Birkbeck College, UK
• Christos Makris, University of Patras, Greece
• Ashish Mani, Dayalbagh Educational Institute, Dayalbagh
• Antonio Moreno, Rovira i Virgili University, Spain
• Daniel C. Neagu, University of Bradford, UK
• Muaz Niazi, COMSATS Institute of IT, Islamabad, Pakistan
• Vasile Palade (Co-Chair), Coventry University, UK
• Isidoros Perikos, University of Patras, Greece
• Camelia Pintea, Technical University of Cluj-Napoca, Romania
• Jim Prentzas, Democritus University of Thrace, Greece
• Roozbeh Razavi-Far, University of Windsor, Canada
• David Sanchez, Rovira i Virgili University, Spain
• Kyriakos Sgarbas, University of Patras, Greece
• Jun Sun, Jiangnan University, China
• Juan Velasquez, University of Chile, Chile
• Douglas Vieira, Enacom-Handcrafted Technologies, Brazil
• Maria Virvou, Department of Informatics, University of Piraeus, Greece

o-LPMLN: A Combination of LPMLN and LPOD

Definition 1. For an LPOD rule r, its i-th option (1 ≤ i ≤ o(r)), denoted by r^i, is defined as

    h_i ← b+(r), not b−(r), not h_1, ..., not h_{i−1}    (8.14)

Definition 2. A split program of an LPOD program P is obtained from P by replacing each rule in P^o with one of its options.

Based on the above definitions, the semantics of an LPOD program P is defined as follows. A consistent set S of literals is a candidate stable model of P if it is a stable model of a split program of P. By CSM(P) we denote the set of all candidate stable models of P. The satisfaction degree deg(r, I) of an interpretation I w.r.t. a rule r with ordered disjunctions is defined as

    deg(r, I) = 1, if b+(r) ⊈ I or b−(r) ∩ I ≠ ∅;
    deg(r, I) = min{k | h_k ∈ h(r) ∩ I}, otherwise    (8.15)

And the satisfaction degree deg(P, I) of an interpretation I w.r.t. an LPOD program P is defined as the sum of the satisfaction degrees of I w.r.t. the LPOD rules in P^o, i.e. deg(P, I) = Σ_{r ∈ P^o} deg(r, I). For a candidate stable model S of P, by S^i(P) we denote the set of LPOD rules in P^o that are satisfied by S at degree i. Based on the notion of satisfaction
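The LPOD notions just introduced (options, split programs, and the satisfaction degree of Eq. (8.15)) can be sketched in a few lines of Python. The tuple encoding of rules and every function name here are illustrative assumptions, not part of the chapter:

```python
from itertools import product

# Illustrative encoding (not from the chapter): an LPOD rule is a tuple
# (heads, pos, neg), where heads is the ordered list [h1, ..., hn] and
# pos/neg are the sets of positive and negated body literals.

def option(rule, i):
    """The i-th option r^i of an LPOD rule (Eq. 8.14):
    h_i <- b+(r), not b-(r), not h_1, ..., not h_{i-1}."""
    heads, pos, neg = rule
    return ([heads[i - 1]], pos, neg | set(heads[:i - 1]))

def split_programs(ordered_rules):
    """All split programs of P (Definition 2): replace each ordered
    rule by one of its options, in every combination."""
    per_rule = [[option(r, i) for i in range(1, len(r[0]) + 1)]
                for r in ordered_rules]
    return [list(combo) for combo in product(*per_rule)]

def deg(rule, interp):
    """Satisfaction degree of Eq. (8.15): 1 when the body is not
    satisfied by I, otherwise the smallest k with h_k in I (the chapter
    assumes I satisfies some head in that case)."""
    heads, pos, neg = rule
    if not pos <= interp or neg & interp:
        return 1
    return min(k for k, h in enumerate(heads, start=1) if h in interp)
```

For a rule a × b with an empty body, the interpretation {a} has degree 1 and {b} degree 2; a program with a two-headed and a three-headed ordered rule has 2 × 3 = 6 split programs.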
degree, for two candidate stable models X and Y of P, Brewka introduces four preference criteria [8]:

– Cardinality-Preferred: X is cardinality-preferred to Y, denoted by X >_c Y, if there is a positive integer i such that |X^i(P)| > |Y^i(P)|, and |X^j(P)| = |Y^j(P)| for all j < i;
– Inclusion-Preferred: X is inclusion-preferred to Y, denoted by X >_i Y, if there is a positive integer i such that Y^i(P) ⊂ X^i(P), and X^j(P) = Y^j(P) for all j < i;
– Pareto-Preferred: X is Pareto-preferred to Y, denoted by X >_p Y, if there is a rule r ∈ P^o such that deg(r, X) < deg(r, Y), and there does not exist a rule r ∈ P^o such that deg(r, Y) < deg(r, X);
– Penalty-Sum-Preferred: X is penalty-sum-preferred to Y, denoted by X >_ps Y, if deg(P, X) < deg(P, Y).

For each pr ∈ {c, i, p, ps}, a candidate stable model X of P is a pr-preferred stable model if there is no candidate stable model Y of P such that Y >_pr X.

8.3 o-LPMLN

In this section, we present a combination of LPMLN and LPOD, called o-LPMLN, which is expected to handle uncertainty, inconsistencies, and preferences in a unified framework.

8.3.1 Syntax of o-LPMLN

Syntactically, an o-LPMLN program M consists of two parts: the regular part M^r and the ordered disjunctive part M^o, where M^r is a regular LPMLN program, and M^o consists of weighted ordered disjunctive rules (wo-rules for short) of the form

    w : h_1 × ... × h_n ← b+(r), not b−(r)    (8.16)

The same as in LPMLN, a wo-rule w : r is called soft if w is a real number, and hard if w is α. It is obvious that o-LPMLN can be regarded as a probabilistic extension of LPOD, just like LPMLN can be viewed as a probabilistic extension of ASP; therefore, the notations introduced for LPMLN and LPOD can be extended to o-LPMLN straightforwardly.

8.3.2 Semantics of o-LPMLN

Since o-LPMLN is a combination of LPMLN and LPOD, there are two ways to define the semantics of an o-LPMLN program. From the perspective of LPMLN, a stable model of an LPMLN program does not
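Two of Brewka's criteria can be sketched directly, assuming the per-rule satisfaction degrees of each candidate stable model are already available as a dictionary from rule names to degrees (an illustrative encoding, not from the chapter):

```python
# deg_x and deg_y map each ordered-disjunction rule (by name) to the
# satisfaction degree of candidate stable models X and Y respectively.

def penalty_sum_preferred(deg_x, deg_y):
    """X >_ps Y iff deg(P, X) < deg(P, Y): compare summed degrees."""
    return sum(deg_x.values()) < sum(deg_y.values())

def pareto_preferred(deg_x, deg_y):
    """X >_p Y iff X is strictly better on some rule and not strictly
    worse on any rule."""
    better = any(deg_x[r] < deg_y[r] for r in deg_x)
    worse = any(deg_x[r] > deg_y[r] for r in deg_x)
    return better and not worse
```

Note that Pareto preference is partial: two models that each win on a different rule are incomparable, while the penalty-sum criterion always orders them.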
have to satisfy all rules in the program. For each stable model, there is a unique maximal subset of the program that can be satisfied by the stable model. Likewise, we can define the stable models of an o-LPMLN program as follows.

Definition 3. For an o-LPMLN program M, a consistent set X of literals is a candidate stable model of M iff X is a candidate stable model of M_X and M^h ⊆ M_X.

For an o-LPMLN program M, by CSM1(M) we denote the set of all candidate stable models defined in Definition 3. There are two key steps in Definition 3:

– reducing an o-LPMLN program M to an LPOD program M_X; and
– checking whether X is a candidate stable model of M_X,

which is the same as the method of defining the semantics of LPMLN. From the perspective of LPOD, a candidate stable model of an LPOD program is defined by its split programs. Therefore, we define the split programs of an o-LPMLN program as follows.

Definition 4. For a wo-rule w : r of the form (8.16), its i-th option (1 ≤ i ≤ o(r)), denoted by w : r^i, is defined as

    w : h_i ← b+(r), not b−(r), not h_1, ..., not h_{i−1}    (8.17)

Definition 5. For an o-LPMLN program M, a split program sp(M) of M is obtained by replacing each rule in M^o with one of its options.

Based on the above notions, we can define the candidate stable models of an o-LPMLN program.

Definition 6. For an o-LPMLN program M, a consistent set X of literals is a candidate stable model of M iff X is a stable model of a split program of M.

For an o-LPMLN program M, by CSM2(M) we denote the set of all candidate stable models defined in Definition 6. It is obvious that there are also two key steps in Definition 6:

– reducing an o-LPMLN program M to an LPMLN program, i.e. a split program of M; and
– checking whether X is a stable model of the split program,

which coincides with the way candidate stable models of an LPOD program are defined. Intuitively, both definitions of candidate stable models of an o-LPMLN program are reasonable; the following
result shows that they are also equivalent.

Lemma 1. For an o-LPMLN program M, a set X of literals is a candidate stable model of M according to Definition 3 iff X is a candidate stable model of M according to Definition 6, i.e. CSM1(M) = CSM2(M).

Proof. For an o-LPMLN program M and a consistent set X of literals, the proof is divided into two parts. In Part 1, we show that if X ∈ CSM1(M), then X ∈ CSM2(M); in Part 2, we show that if X ∈ CSM2(M), then X ∈ CSM1(M).

Part 1. Suppose X ∈ CSM1(M). By Definition 3, X is a candidate stable model of M_X, which means X is a stable model of sp(M_X). Let M′ = sp(M_X) ∪ sp(M − M_X); obviously, M′ is a split program of M, and M′_X = sp(M_X). Since X is a stable model of sp(M_X), X is a stable model of M′, which means X ∈ CSM2(M).

Part 2. Suppose X ∈ CSM2(M). By Definition 6, X is a stable model of a split program sp(M) of M, which means X is a stable model of sp(M)_X. It is easy to check that sp(M)_X is a split program of M_X; therefore, X is a candidate stable model of M_X, which means X ∈ CSM1(M).

Lemma 1 shows that the two definitions of candidate stable models of an o-LPMLN program are equivalent; therefore, we use CSM(M) to denote the set of all candidate stable models of an o-LPMLN program M. Now, we define the weight, probability and satisfaction degrees of a candidate stable model. For an o-LPMLN program M and a candidate stable model X of M, by Definition 3, the definitions of the weight and probability degrees of a stable model of an LPMLN program can be extended to o-LPMLN straightforwardly.

Definition 7. For an o-LPMLN program M and a candidate stable model X of M, the weight degree W(M, X) of X w.r.t. M is defined as in Eq. (8.10), and the probability degree P(M, X) of X w.r.t. M is defined as in Eq. (8.11).

Let us now turn to the definition of the satisfaction degree of a candidate stable model of an o-LPMLN program, which is different from the case in LPOD. Since a candidate stable
model X of an o-LPMLN program M only has to satisfy the rules in M_X, we extend Eq. (8.15) as follows.

Definition 8. For a wo-rule w : r and a consistent set X of literals, the satisfaction degree deg(w : r, X) of X w.r.t. w : r is defined as:

– if b+(r) ⊈ X or b−(r) ∩ X ≠ ∅, then deg(w : r, X) = 1;
– if b+(r) ⊆ X, b−(r) ∩ X = ∅, and h(r) ∩ X ≠ ∅, then deg(w : r, X) = min{k | h_k ∈ h(r) ∩ X};
– if b+(r) ⊆ X, b−(r) ∩ X = ∅, and h(r) ∩ X = ∅, then deg(w : r, X) = o(r) + 1.

Intuitively, if a wo-rule w : r is satisfied by a candidate stable model X, then its satisfaction degree is defined the same as in Eq. (8.15); if w : r cannot be satisfied by X, then its satisfaction degree w.r.t. X is o(r) + 1. We use the following example to show the intuition behind the definition.

Example 1. Consider an o-LPMLN program M consisting of the rule

    1 : r : prefer_os(android) × prefer_os(ios)    (8.18)

It is easy to check that M has three candidate stable models: X = {prefer_os(android)}, Y = {prefer_os(ios)}, and Z = ∅, where both X and Y satisfy rule 1 : r in M, while Z does not. Since X and Y satisfy more preferences than Z, they should be better solutions than Z, i.e. the satisfaction degrees of X and Y should be less than that of Z. According to Definition 8, deg(1 : r, X) = 1, deg(1 : r, Y) = 2, and deg(1 : r, Z) = 3, which coincides with our intuition.

Definition 9. For an o-LPMLN program M and a candidate stable model X, the satisfaction degree deg(M, X) of X w.r.t. M is defined as

    deg(M, X) = Σ_{w:r ∈ M^o} deg(w : r, X)    (8.19)

Based on the above definitions, all four kinds of preference criteria introduced in LPOD can be introduced in o-LPMLN straightforwardly. In the definitions of o-LPMLN, the weight degree and satisfaction degree of a stable model are two kinds of evaluations, from LPMLN and LPOD respectively. A stable model with a higher weight degree satisfies more weighted rules than stable models with lower weight degrees, which reflects the quantitative inference of
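The extended satisfaction degree of Definition 8, including the o(r) + 1 penalty for an unsatisfied wo-rule, can be sketched as follows; the (heads, pos, neg) tuple encoding is an illustrative assumption, and the sample rule reproduces Example 1:

```python
def deg_wo(rule, X):
    """Satisfaction degree of a wo-rule under Definition 8: 1 when the
    body is not satisfied, the smallest index of a head in X when some
    head holds, and o(r) + 1 when the body holds but no head does."""
    heads, pos, neg = rule
    if not pos <= X or neg & X:
        return 1
    in_x = [k for k, h in enumerate(heads, start=1) if h in X]
    return min(in_x) if in_x else len(heads) + 1

# Example 1:  1 : prefer_os(android) x prefer_os(ios)
r = (["prefer_os(android)", "prefer_os(ios)"], set(), set())
```

Running deg_wo on the three candidate stable models of Example 1 yields degrees 1, 2 and 3 for X, Y and Z respectively, matching the values derived in the text.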
o-LPMLN. And a stable model with a lower satisfaction degree provides more preferred options, which reflects the qualitative inference of o-LPMLN. Therefore, the optimal stable models of an o-LPMLN program are those with the highest weight degree that are the most preferred under a given preference criterion. For example, if we use the penalty-sum preference criterion of LPOD, an optimal stable model of an o-LPMLN program M is one that has the greatest weight degree and the lowest satisfaction degree w.r.t. M. Now, we use an example to show the usefulness of o-LPMLN in handling uncertainty and preferences.

Example 2. Continuing our advisor system AS, suppose there are two kinds of smart phones, p1 and p2, and we have some pieces of information about them:

– p1 runs the iOS system, and p2 runs the Android system;
– p1 has a big memory at weight degree 5, and a small memory at weight degree 1;
– p2 has a small memory at weight degree 5, and a big memory at weight degree 1.

This information can be encoded as follows:

    phone(p1)  os(p1, ios)    (8.20)
    5 : mem(p1, big)  1 : mem(p1, small)    (8.21)
    phone(p2)  os(p2, android)    (8.22)
    1 : mem(p2, big)  5 : mem(p2, small)    (8.23)
    ← mem(P, M1), mem(P, M2), M1 ≠ M2    (8.24)

The last constraint in the above program means that the memory of a phone is either big or small. John wants a phone with a big memory, and he prefers phones with the Android system, which is encoded as follows:

    w1 : prefer_os(android) × prefer_os(ios)    (8.25)
    w2 : require_mem(big)    (8.26)

The weight degree w2 of rule (8.26) is greater than the weight degree w1 of rule (8.25). To make a decision, we use the following rules:

    sel(p1) ∨ sel(p2)    (8.27)
    ← sel(X), require_mem(M), not mem(X, M)    (8.28)
    ← sel(X), prefer_os(S), not os(X, S)    (8.29)
    ← mem(X, M), not sel(X)    (8.30)

Rules (8.27)–(8.30) mean that only one phone is selected in each stable model, and users' requirements must be satisfied. There are two candidate solutions satisfying all requirements, i.e. X = {sel(p1), mem(p1, big), os(p1,
ios)} and Y = {sel(p2), mem(p2, big), os(p2, android)}. It is easy to check that W(M, X) = e^11 and W(M, Y) = e^7; therefore, sel(p1) is the optimal decision, although John prefers the Android system.

8.4 Implementing o-LPMLN

The key part of implementing o-LPMLN is to compute the candidate stable models of an o-LPMLN program. According to Definitions 3 and 6, there exist two kinds of approaches to computing the candidate stable models. In this section, we investigate implementations of o-LPMLN based on the two definitions of candidate stable models.

8.4.1 Translating o-LPMLN into LPOD

Just like an LPMLN program can be translated into an ASP program, an o-LPMLN program can also be translated into an LPOD program, which is basically the same as the translation of LPMLN presented in [19].

Definition 10. For an o-LPMLN program M, its LPOD translation τo(M) is obtained by replacing each soft rule wi : ri with the following three rules:

– unsat(i) ← b+(ri), not b−(ri), not h(ri)
– h*(ri) ← b+(ri), not b−(ri), not unsat(i)
– :∼ unsat(i) [wi, i]

where h*(ri) = h_1 ∨ ... ∨ h_{|h(ri)|} if wi : ri is a normal LPMLN rule, and h*(ri) = h_1 × ... × h_{|h(ri)|} if wi : ri is a wo-rule; the third rule is a weak constraint that is used to evaluate the weight of a stable model [12].

For an o-LPMLN program M and its LPOD translation τo(M), if X is a candidate stable model of τo(M), then by the semantics of weak constraints, the penalty degree Pe(τo(M), X) of X w.r.t. τo(M) is

    Pe(τo(M), X) = Σ_{unsat(i) ∈ X} wi    (8.31)

For a candidate stable model X of M, the set φ(X) is defined as

    φ(X) = X ∪ {unsat(i) | X ⊭ wi : ri}    (8.32)

Theorem 1. For an o-LPMLN program M and its LPOD translation τo(M), a set X of literals is a candidate stable model of M iff φ(X) is a candidate stable model of τo(M). For a candidate stable model X of M, its weight degree W(M, X) w.r.t. M is

    W(M, X) = W(M) × exp(−Pe(τo(M), φ(X)))    (8.33)

and its satisfaction degree deg(wi : ri, X) w.r.t. a wo-rule wi
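The rewriting of Definition 10 can be sketched as a small string-level translator. Only the wo-rule case is handled (a normal disjunctive rule would use ∨ in place of ×), and the rule encoding (weight, ordered head list, body string) is an illustrative assumption:

```python
def lpod_translation(soft_rules):
    """Sketch of the LPOD translation of Definition 10. Each soft rule
    w_i : r_i with ordered heads h_1..h_n and body B becomes:
        unsat(i) <- B, not h_1, ..., not h_n
        h_1 x ... x h_n <- B, not unsat(i)
        :~ unsat(i) [w_i, i]
    Rules are built as plain strings for readability."""
    out = []
    for i, (w, heads, body) in enumerate(soft_rules, start=1):
        b = body + ", " if body else ""
        not_heads = ", ".join("not " + h for h in heads)
        out.append(f"unsat({i}) <- {b}{not_heads}")
        out.append(f"{' x '.join(heads)} <- {b}not unsat({i})")
        out.append(f":~ unsat({i}) [{w}, {i}]")
    return out

# The program of Example 3:  M = { 2 : a <- b,  3 : c x d }
rules = lpod_translation([(2, ["a"], "b"), (3, ["c", "d"], "")])
```

Applied to the two-rule program of Example 3, the translator produces the six rules (8.37)–(8.42) of that example.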
: ri is

    deg(wi : ri, X) = deg(τo({wi : ri}), φ(X)), if X ⊨ wi : ri;
    deg(wi : ri, X) = o(ri) + 1, otherwise    (8.34)

The proof of Theorem 1 is similar to the corresponding results for LPMLN [20]. Theorem 1 shows that all inference results of an o-LPMLN program can be computed from its LPOD translation, which provides an approach to implementing o-LPMLN via existing LPOD solvers.

Example 3. Consider the following o-LPMLN program M:

    2 : a ← b    (8.35)
    3 : c × d    (8.36)

its LPOD translation τo(M) is

    unsat(1) ← b, not a    (8.37)
    a ← b, not unsat(1)    (8.38)
    :∼ unsat(1). [2, 1]    (8.39)
    unsat(2) ← not c, not d    (8.40)
    c × d ← not unsat(2)    (8.41)
    :∼ unsat(2). [3, 2]    (8.42)

8.4.2 Translating o-LPMLN into LPMLN

Now, we investigate the translation from o-LPMLN to LPMLN. Firstly, we present the following property of wo-rules, called the satisfaction chain property.

Proposition 1. Let w : r be a wo-rule in an o-LPMLN program M. If X is a candidate stable model of M, then X ⊨ w : r^k for any deg(w : r, X) ≤ k ≤ o(r), where w : r^k is the k-th option of w : r.

The satisfaction chain property shows the relationship between a candidate stable model and the options of a wo-rule, and it serves as the foundation of the translation from o-LPMLN to LPMLN.

Definition 11. For an o-LPMLN program M, its LPMLN translation τm(M) consists of three parts, i.e. τm(M) = M^r ∪ τm1(M^o) ∪ τm2(M^o), where M^r is the regular part of M, and the other two parts of τm(M) are defined as follows:

– τm1(M^o) = {w : sh(r) | w : r ∈ M^o}, where sh(r) is a complete shift of r, defined as

    ← body(r), not h_1, ..., not h_{o(r)}    (8.43)

– τm2(M^o) = {1 : r^k | w : r ∈ M^o, and 1 ≤ k ≤ o(r)}.

Theorem 2. For an o-LPMLN program M and its LPMLN translation τm(M), a consistent set X of literals is a candidate stable model of M iff it is a stable model of τm(M). For a candidate stable model X of M, its weight degree W(M, X) w.r.t. M is

    W(M, X) = W(M^r ∪ τm1(M^o), X)    (8.44)

and its satisfaction degree deg(w : r, X
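Definition 11 can likewise be sketched as a string-level rewriting: the regular part is kept, and each wo-rule contributes its complete shift at the original weight plus all of its options at weight 1. The encoding is again an illustrative assumption:

```python
def lpmln_translation(regular, ordered):
    """Sketch of tau_m (Definition 11): tau_m(M) = M^r u tau_m1(M^o)
    u tau_m2(M^o). For each wo-rule w : r with heads h_1..h_n and body
    B, add the complete shift  w : <- B, not h_1, ..., not h_n
    (Eq. 8.43) and every option  1 : r^k  for 1 <= k <= o(r)."""
    out = list(regular)                      # M^r passes through
    for w, heads, body in ordered:
        b = body + ", " if body else ""
        not_heads = ", ".join("not " + h for h in heads)
        out.append(f"{w} : <- {b}{not_heads}")   # w : sh(r)
        for k, h in enumerate(heads, start=1):   # 1 : r^k
            lits = ([body] if body else []) + \
                   ["not " + x for x in heads[:k - 1]]
            out.append(f"1 : {h} <- {', '.join(lits)}" if lits
                       else f"1 : {h}")
    return out

# tau_m of the program of Example 3:  M = { 2 : a <- b,  3 : c x d }
tm = lpmln_translation(["2 : a <- b"], [(3, ["c", "d"], "")])
```

On the program of Example 3 this yields exactly the four rules (8.46)–(8.49) of Example 4: the soft regular rule unchanged, the shifted constraint, and the two options of c × d at weight 1.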
) w.r.t. a wo-rule w : r is

    deg(w : r, X) = o(r) + 1 − ln(W(τm2(M^o), X))    (8.45)

Proof. The proof is divided into two parts: in the first part, we show that if X ∈ CSM(M), then X ∈ SM(τm(M)); in the second part, we show that if X ∈ SM(τm(M)), then X ∈ CSM(M). For an o-LPMLN program M, let M* be a split program of M. For each wo-rule w : r ∈ M^o, let w : r* be the corresponding LPMLN rule in M*, and w : sh(r) the corresponding rule in τm1(M^o). Here, we omit the proof about the weight degree and satisfaction degree, which is straightforward given the following proof.

Part 1: For the split program M*, without loss of generality, suppose X is a stable model of M*. By the definition of split program, we have M*_X ⊆ τm(M)_X. Assume there is a proper subset X′ of X such that X′ ⊨ τm(M)_X; by the above results, we have X′ ⊨ M*_X, which contradicts the premise that X is a stable model of M*. Hence, X is a stable model of τm(M).

Part 2: Suppose X is a stable model of τm(M). By the definition, it is obvious that τm(M)_X = M^r_X ∪ τm1(M^o)_X ∪ {1 : r^k ∈ τm2(M^o) | deg(w : r, X) ≤ k ≤ o(r)}. Let M(X) be a split program of M w.r.t. X, constructed as follows:

– add all rules in M^r to M(X);
– for each wo-rule w : r ∈ M^o such that deg(w : r, X) ≤ o(r), add w : r^k to M(X), where k = deg(w : r, X);
– for each wo-rule w : r ∈ M^o such that deg(w : r, X) > o(r), add w : r^k to M(X), where k = o(r).

It is clear that M(X) is a split program of M. Assume there is a proper subset X′ of X such that X′ ⊨ M(X)_X. By Proposition 1, it is clear that X′ ⊨ τm2(M)_X. By the definitions of split program, we have X′ ⊨ M^r_X and X′ ⊨ τm1(M^o)_X, which means X′ ⊨ τm(M)_X. This contradicts X ∈ SM(τm(M)); hence, we can infer that X ∈ CSM(M).

Combining the above results, we have shown that CSM(M) = SM(τm(M)), and Theorem 2 is proven.

Theorem 2 directly implies an approach to computing o-LPMLN programs by translating them into LPMLN programs and using existing LPMLN
solvers, such as LPMLN2ASP [19], LPMLN-Models [26], etc. Among the different preference criteria of LPOD, the penalty-sum criterion relates especially to the weight of a stable model defined in LPMLN. Therefore, we have a direct corollary of Theorem 2, shown in Corollary 1.

Corollary 1. For an o-LPMLN program M and its LPMLN translation τm(M), X is a ps-preferred stable model of M iff X ∈ SM(τm(M)), and there does not exist a stable model Y ∈ SM(τm(M)) such that W(τm2(M^o), Y) > W(τm2(M^o), X).

Example 4. Let M be the o-LPMLN program of Example 3; its LPMLN translation τm(M) is

    2 : a ← b    (8.46)
    3 : ← not c, not d    (8.47)
    1 : c    (8.48)
    1 : d ← not c    (8.49)

8.4.3 Discussion

In this section, we have presented two translations of o-LPMLN, both of which are modular and linear-time constructible, which provides two approaches to implementing o-LPMLN via existing LPOD and LPMLN solvers [11, 19, 26]. The translations also show that the computational complexity of o-LPMLN is the same as that of LPOD and LPMLN, i.e. our extension does not increase the computational complexity. It is easy to observe that the approach presented here can also be used to compute LPODs, since an LPOD can be viewed as an o-LPMLN program without soft rules. For the implementation of LPOD, Lee and Yang present an "almost" modular translation from LPOD to ASP by introducing some auxiliary atoms [23], while our translation from LPOD to LPMLN does not introduce any new atoms and is completely modular and linear-time constructible. All other implementations of LPOD make iterative calls to ASP solvers to find preferred stable models [9, 11], while our translation only needs to call an LPMLN solver once. In addition, it is worth noting that although LPOD rules can be encoded in LPMLN, our extension of LPMLN cannot be seen as merely syntactic sugar. On the one hand, ordered disjunction provides a way to express preferences in LPMLN, which is essentially different from
uncertain and inconsistent knowledge. On the other hand, o-LPMLN introduces new criteria to evaluate the stable models of an LPMLN program, which enriches the expressivity of LPMLN.

8.5 A Prototype Application: The System AS

In this section, we present our advisor system AS, a prototype built to show knowledge representation and reasoning in o-LPMLN and to test our implementation of o-LPMLN.

156 B Wang et al

Fig. 8.1 The architecture of the system AS

8.5.1 System Description

Figure 8.1 shows the architecture of the system AS, which consists of the three standard components of a knowledge-based system. Users' queries are encoded in the form of o-LPMLN and entered into the system through the user interface, and the results of the queries are returned to the interface. The inference engine of the AS system is an o-LPMLN reasoning engine, which can be implemented via the translations presented in this paper: for an input o-LPMLN program, a rewriter translates it into an LPMLN or LPOD program, and a corresponding solver is called to compute the inference results. The knowledge base has three main components, i.e., knowledge bases of decision strategies, specifications of smart phones, and vague concepts. Our smart phone data are from the Kaggle dataset GSMArena Mobile Devices,(2) and we only use data of phones manufactured after 2017, which contains about 1100 items.

Vague Concepts. In our data set of smart phones, only the exact parameters of phones are collected, which is quantitative information, while the corresponding qualitative information is also needed in the application. For example, John claims that he wants a phone with a big random access memory (RAM), where "big RAM" is a vague concept, since we only know that the RAM of a phone is, e.g., "4GB". To answer such kinds of queries, we define a mapping from quantitative information to qualitative information. For example, Table 8.1 provides a mapping from the sizes of RAM to the qualitative
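The rewriter-plus-solver pipeline just described can be sketched as a simple dispatcher. All function names below are hypothetical stand-ins; a real deployment would invoke an external solver such as LPMLN2ASP instead of the echo stubs used here.

```python
# Sketch of the AS inference pipeline: a rewriter translates the input
# o-LPMLN program and dispatches it to a back-end solver stub.

def translate_to_lpmln(program):      # stands in for the translation tau_m
    return ("lpmln", program)

def translate_to_lpod(program):       # stands in for the LPOD translation
    return ("lpod", program)

def solve(program, backend="lpmln"):
    """Rewrite the o-LPMLN program and call the matching solver stub."""
    if backend == "lpmln":
        target = translate_to_lpmln(program)
    elif backend == "lpod":
        target = translate_to_lpod(program)
    else:
        raise ValueError(f"unknown backend: {backend}")
    # A real system would shell out to a solver here; we just echo the target.
    return f"solved {target[1]} via {target[0]}"

print(solve("demo_program"))  # solved demo_program via lpmln
```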
descriptions of RAM; each number in the table denotes a weight degree of the corresponding mapping, i.e., the table is actually a weight distribution over the mapping. The table can be encoded in LPMLN rules; for example, rule (8.50)

2 https://www.kaggle.com/msainani/gsmarena-mobile-devices

Table 8.1 Mapping of the size of RAM (rows: Big, Medium, Small; columns: RAM ranges up to and beyond 4GB; cell values are weight degrees)

means that if the RAM is greater than 4GB, then it can be viewed as a big RAM at weight degree 5:

5 : ram(Item, big) ← ram(Item, M), M > 4GB    (8.50)

Note that the weight degrees in Table 8.1 are assigned manually for brevity. Actually, these weights can be learned from data automatically in practical applications; Lee and Wang have presented work on weight learning in LPMLN [21], which can be used in our applications. Besides, there are other vague concepts such as the weight of a phone, the size of the display screen, the price, etc., all of which can be handled in a similar way.

Decision Strategies. In the area of qualitative decision making, there are two kinds of decision strategies, i.e., pessimistic and optimistic strategies, which can be extended to our system AS. Usually, a qualitative decision making problem can be viewed as a tuple (K, D, G) [14], where K is a knowledge base, D is the decisions that we are going to make, and G is our decision goals. For example, the claim "John wants a phone with a big memory and Android system" is our decision goal, the dataset of phones is our knowledge base, and which phone is the best is the decision that we are going to make. For the pessimistic strategy, it is required that all of our goals be satisfied, that is,

K ∪ D |= G    (8.51)

For the optimistic strategy, it is required that there be no conflicts between our decisions and goals, that is,

K ∪ D ∪ G is consistent    (8.52)

Obviously, the system AS can recommend more phones by using the optimistic strategy, and it would recommend more suitable phones by using the pessimistic strategy. Recalling Example 2, it
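The quantitative-to-qualitative mapping behind Table 8.1 and rule (8.50) can be sketched in code. Apart from the fact that RAM above 4GB maps to "big" at weight degree 5, the thresholds and weights below are illustrative placeholders, since most of the table's cell values are not reproduced here.

```python
# Sketch of the mapping from a quantitative RAM size to weighted
# qualitative labels, in the spirit of Table 8.1 and rule (8.50).

def ram_labels(ram_gb, big_threshold=4):
    """Return (label, weight_degree) pairs for a RAM size in GB."""
    if ram_gb > big_threshold:
        return [("big", 5)]            # as in rule (8.50)
    # Below the threshold the table assigns weights to medium/small;
    # the concrete numbers here are made up for the sketch.
    return [("medium", 3), ("small", 2)] if ram_gb >= 2 else [("small", 5)]

print(ram_labels(6))   # [('big', 5)]
print(ram_labels(1))   # [('small', 5)]
```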
can be observed that the pessimistic strategy is used in the decision, since users' requirements are encoded as uncertain rules, which means a solution may not satisfy all requirements. In particular, if a user's requirements are contradictory, the optimistic strategy is not available. For example, if John wants an iPhone running the Android system, there obviously does not exist an option satisfying all requirements.

8.5.2 Experiments

In our experiments, all cases were carried out on a Dell PowerEdge R920 server with an Intel Xeon E7-4820@2.00GHz with 32 cores and GB RAM, running the Ubuntu 16.04 operating system. The running times of the queries are not presented here for two main reasons. On the one hand, we only provide a preliminary implementation of o-LPMLN in this paper; the running time could be improved by enhancing o-LPMLN solvers. On the other hand, all data and decision rules are stored as text files, and it takes plenty of time to load and parse these files, which could be optimized by caching parsed data. Therefore, our experiments are only used to show the effects of knowledge representation and reasoning in o-LPMLN, as follows.

Query 1: John wants a phone with medium RAM, big internal memory, and medium display screen, which is encoded as follows:

: require_ram(medium)    (8.53)
: require_internal_mem(big)    (8.54)
: require_display_size(big)    (8.55)

Output: The inference engine outputs 40 items that satisfy all the requirements of Query 1, where 32 items use the Android system and the others use the iOS system.

Query 2: John wants a phone with medium RAM, big internal memory, and medium display screen, which is the same as the requirements of Query 1. Meanwhile, John prefers newly released smart phones, which can be encoded as follows:

prefer_release(2019) × prefer_release(2018) × prefer_release(2017)    (8.56)

Output: The inference engine outputs items running iOS and items running Android, which are listed as follows:

Apple, iPad Air /
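The two strategies (8.51) and (8.52) can be sketched over a toy propositional setting in which K, D, and G are sets of literals and "-p" denotes the complement of "p". This is a deliberately simplified model (entailment reduced to containment), not the possibilistic machinery of [14].

```python
# Sketch of the pessimistic (8.51) and optimistic (8.52) decision strategies
# over sets of literals; "-p" is the complementary literal of "p".

def consistent(literals):
    """No literal appears together with its complement."""
    return not any(("-" + p) in literals
                   for p in literals if not p.startswith("-"))

def pessimistic_ok(K, D, G):
    # all goals must follow from K u D (here: be contained in it)
    return G <= (K | D) and consistent(K | D)

def optimistic_ok(K, D, G):
    # no conflict between knowledge, decisions, and goals
    return consistent(K | D | G)

K = {"android(p1)", "big_mem(p1)"}
G = {"android(p1)", "big_mem(p1)", "cheap(p1)"}
print(pessimistic_ok(K, {"buy(p1)"}, G))  # False: cheap(p1) is not entailed
print(optimistic_ok(K, {"buy(p1)"}, G))   # True: no contradiction
```

As the output illustrates, the optimistic strategy accepts more candidates than the pessimistic one, matching the observation above.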
mini (2019); Archos, Diamond; Energizer, Ultimate U 650S; Huawei, Honor 10i / Enjoy 9s; Lenovo, K Enjoy.

It can be observed that the iPad Air and iPad mini are not usually viewed as smart phones; therefore, to answer the query properly, we also need to construct an ontology that describes the classes of items. For example, we can add the following triples to our knowledge base:

iPad-Air isa tablet-PC    (8.57)
iPad-mini isa tablet-PC    (8.58)
tablet-PC disjointWith smart-phone    (8.59)

Besides, the subclass relation is needed in our system. For example, John wants a blue phone, but the colors of many items are "gradient blue", "starlight blue", etc., which can be handled by introducing the following subclass relations among colors:

gradient-blue subclassOf blue    (8.60)
starlight-blue subclassOf blue    (8.61)

Therefore, integrating rule-based methods and ontology-based methods is important for a knowledge-based system.

Query 3: John wants a phone with medium RAM, big internal memory, and medium display screen, which is the same as the requirements of Query 1. Meanwhile, John prefers inexpensive smart phones, which can be encoded as follows:

prefer_price(low) × prefer_price(medium) × prefer_price(high)    (8.62)

Output: The inference engine outputs the following items: Energizer, Ultimate U 650S; Huawei, Enjoy 9s; Lenovo, K Enjoy / S5; Xiaomi, Mi Max.

The above cases show the expressivity of o-LPMLN in handling uncertainty and preferences. Meanwhile, Query 2 shows that a good knowledge representation and reasoning system should integrate rule-based methods and ontology-based methods, which is a direction for future work.

8.6 Conclusion and Future Work

In this paper, we present an alternative knowledge representation and reasoning tool for handling inconsistencies, uncertainty, and preferences in a unified framework, called o-LPMLN, which is an extension of LPMLN with ordered disjunctions. Our contributions are as follows. Firstly, we present the syntax and semantics of
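The use of subclass relations such as (8.60) and (8.61) when matching a query color can be sketched as a transitive-closure check. The relation tuples mirror the triples above; the traversal code itself is an illustrative sketch, not part of the system AS.

```python
# Sketch of colour matching via subclassOf: an item's colour matches the
# requested one if it reaches it through zero or more subclassOf edges.

SUBCLASS = {
    ("gradient-blue", "blue"),    # triple (8.60)
    ("starlight-blue", "blue"),   # triple (8.61)
}

def is_subclass_of(color, target, rel=SUBCLASS):
    """True if `color` reaches `target` via the subclassOf relation."""
    if color == target:
        return True
    return any(sub == color and is_subclass_of(sup, target, rel)
               for sub, sup in rel)

print(is_subclass_of("gradient-blue", "blue"))   # True
print(is_subclass_of("red", "blue"))             # False
```

With such a check, a query for a "blue" phone also retrieves items whose listed color is "gradient blue" or "starlight blue".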
the language o-LPMLN. Secondly, we present two translations from an o-LPMLN program to a regular LPMLN program and to an LPOD program, respectively. Using existing LPMLN and LPOD solvers, the translations provide methods to implement o-LPMLN. As a by-product, the translation from o-LPMLN to LPMLN also provides a one-shot approach to implementing LPOD. Finally, we use o-LPMLN in a prototype application, i.e., the shopping assistant AS, and the preliminary results show that o-LPMLN is an alternative approach to handling uncertainty and preferences in real-world applications.

For the future, we plan to develop an efficient o-LPMLN solver. Besides, we plan to further study our shopping assistant AS and test our implementations of o-LPMLN in more real-world applications. In addition, since LPOD is an early work on representing preferences in ASP, several formalisms have been presented after it, such as Answer Set Optimization [7] and the meta-programming technique [16]; investigating combinations of these formalisms with LPMLN could also be future work.

Acknowledgements We are grateful to the anonymous referees for their useful comments. The work was supported by the National Key Research and Development Plan of China (Grant No. 2017YFB1002801).

References

1. Balai, E., Gelfond, M.: On the relationship between P-log and LPMLN. In: Kambhampati, S. (ed.)
Proceedings of the 25th International Joint Conference on Artificial Intelligence, pp. 915–921 (2016)
2. Balai, E., Gelfond, M., Zhang, Y.: Towards answer set programming with sorts. In: Proceedings of the 12th International Conference on Logic Programming and Nonmonotonic Reasoning, pp. 135–147 (2013)
3. Baral, C.: Knowledge Representation, Reasoning and Declarative Problem Solving. Cambridge University Press, Cambridge, MA (2003)
4. Baral, C., Gelfond, M., Rushton, N.: Probabilistic reasoning with answer sets. Theory Pract. Logic Program. 9(1), 57–144 (2009)
5. Becerra-Fernandez, I., Sabherwal, R.: Knowledge Management Systems and Processes. M.E. Sharpe, London (2010)
6. Brewka, G.: Logic programming with ordered disjunction. In: Proceedings of the 9th International Workshop on Non-Monotonic Reasoning, pp. 100–105 (2002)
7. Brewka, G.: Answer sets: from constraint programming towards qualitative optimization. In: Proceedings of the 7th Logic Programming and Nonmonotonic Reasoning, vol. 2923, pp. 34–46. Fort Lauderdale, USA (2004)
8. Brewka, G.: Preferences in answer set programming. In: Proceedings of the 11th Conference of the Spanish Association for Artificial Intelligence on Current Topics in Artificial Intelligence, pp. 1–10 (2005)
9. Brewka, G., Delgrande, J.P., Romero, J.: asprin: customizing answer set preferences without a headache. In: Proceedings of the 29th AAAI Conference on Artificial Intelligence, pp. 1467–1474 (2015)
10. Brewka, G., Eiter, T., Truszczyński, M.: Answer set programming at a glance. Commun. ACM 54(12), 92–103 (2011)
11. Brewka, G., Niemelä, I., Syrjänen, T.: Implementing ordered disjunction using answer set solvers for normal programs. In: Proceedings of the 8th European Conference on Logics in Artificial Intelligence, pp. 444–456. Cosenza, Italy (2002)
12. Calimeri, F., Faber, W., Gebser, M., Ianni, G., Kaminski, R., Krennwallner, T., Leone, N., Ricca, F., Schaub, T.: ASP-Core-2 input language format. ASP Standardization Working Group (2012)
13. Confalonieri, R., Nieves, J.C., Osorio, M., Vázquez-Salceda, J.: Dealing with explicit preferences and uncertainty in answer set programming. Ann. Math. Artif. Intell. 65(2–3), 159–198 (2012)
14. Confalonieri, R., Prade, H.: Using possibilistic logic for modeling qualitative decision: answer set programming algorithms. Int. J. Approximate Reasoning 55(2), 711–738 (2014)
15. De Raedt, L., Kimmig, A., Toivonen, H.: ProbLog: a probabilistic Prolog and its application in link discovery. In: Veloso, M.M. (ed.) Proceedings of the 20th International Joint Conference on Artificial Intelligence, pp. 2468–2473 (2007)
16. Gebser, M., Kaminski, R., Schaub, T.: Complex optimization in answer set programming. Theory Pract. Logic Program. 11(4–5), 821–839 (2011)
17. Gelfond, M., Kahl, Y.: Knowledge Representation, Reasoning, and the Design of Intelligent Agents. Cambridge University Press, Cambridge, MA (2014)
18. Gelfond, M., Lifschitz, V.: The stable model semantics for logic programming. In: Kowalski, R.A., Bowen, K.A. (eds.) Proceedings of the Fifth International Conference and Symposium on Logic Programming, pp. 1070–1080. MIT Press, Cambridge, MA (1988)
19. Lee, J., Talsania, S., Wang, Y.: Computing LPMLN using ASP and MLN solvers. Theory Pract. Logic Program. 17(5–6), 942–960 (2017)
20. Lee, J., Wang, Y.: Weighted rules under the stable model semantics. In: Baral, C., Delgrande, J.P., Wolter, F. (eds.) Proceedings of the Fifteenth International Conference on Principles of Knowledge Representation and Reasoning, pp. 145–154. AAAI Press (2016)
21. Lee, J., Wang, Y.: Weight learning in a probabilistic extension of answer set programs. In: Proceedings of the 16th International Conference on the Principles of Knowledge Representation and Reasoning, pp. 22–31 (2018)
22. Lee, J., Yang, Z.: LPMLN, weak constraints, and P-log. In: Singh, S.P., Markovitch, S. (eds.) Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pp. 1170–1177. AAAI Press (2017)
23. Lee, J., Yang, Z.: Translating LPOD and CR-Prolog2 into standard answer set programs. Theory Pract. Logic Program. 18(3–4), 589–606 (2018)
24. Nicolas, P., Garcia, L., Stéphan, I., Lefèvre, C.: Possibilistic uncertainty handling for answer set programming. Ann. Math. Artif. Intell. 47(1–2), 139–181 (2006)
25. Richardson, M., Domingos, P.: Markov logic networks. Mach. Learn. 62(1–2), 107–136 (2006)
26. Wang, B., Zhang, Z.: A parallel LPMLN solver: primary report. In: Bogaerts, B., Harrison, A. (eds.) Proceedings of the 10th Workshop on Answer Set Programming and Other Computing Paradigms, pp. 1–14. CEUR-WS, Espoo, Finland (2017)