A Learning Method based on Bisimulation in Inconsistent Knowledge Systems

2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, November 18-21, 2018

Thi Hong Khanh Nguyen, Quang-Thuy Ha and Trong Hieu Tran

Thi Hong Khanh Nguyen, Quang-Thuy Ha and Trong Hieu Tran are with the Faculty of Information Technology, University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam. Thi Hong Khanh Nguyen is also with Electric Power University, Vietnam. khanhnth@epu.edu.vn, thuyhq@vnu.edu.vn, hieutt@vnu.edu.vn

Abstract— Inconsistencies may naturally occur in the application domains considered in Artificial Intelligence, for example as a result of data mining over distributed sources. In order to handle inconsistent knowledge, several paraconsistent description logics have been proposed. In this paper, we address the problem of concept learning for an inconsistent knowledge base system based on bisimulation. The proposed algorithm learns a concept from a training information system in a paraconsistent description logic, given sets of positive items, negative items, and inconsistent items. We present a system for learning concepts in an inconsistent knowledge base and discuss preliminary experimental results obtained in an electronic-devices application domain.

I. INTRODUCTION

Description logics (DLs) are a family of formal languages which are well suited for representation and reasoning in a domain of interest. DLs are of particular importance in providing theoretical models for semantic systems. They are the basis for building languages for modeling ontologies, among which OWL is the language recommended by the W3C as an international standard for use in Semantic Web systems [14], [21]. Description logics have usually been considered as syntactic variants of restricted versions of classical first-order logic [1].
On the other hand, in Semantic Web and multiagent applications, knowledge fusion frequently leads to inconsistencies [13]. A way to deal with inconsistencies is to follow the area of paraconsistent reasoning [22], [21], [15], [16], [4], [28], [8], [31].

Concept learning in DLs is similar to binary classification in traditional machine learning. The difference is that in DLs objects are described not only by attributes but also by the relationships between objects. As bisimulation is the notion for characterizing indiscernibility of objects in DLs, it is very useful for concept learning in these logics [27], [30], [26], [9], [12].

Consider a domain with individuals represented by unary and binary predicates, which correspond in a DL language to concept names and role names, respectively. Such a domain can be described by different sources. For instance, the individuals may be objects in an area on earth, described by some boolean attributes and binary relationships between them, and the sources of information may be the computer systems of different satellites. Another example is the following: some banks cooperate to share information about their customers to a certain extent; the banks' customers are the individuals; atomic concepts can be credibility, wealth, and financial discipline; atomic roles can be some relationships based on transactions. Sources may provide consistent or inconsistent assertions; based on them, a paraconsistent interpretation can be generated as an integrated information system.

Concept learning in description logics has been studied by many researchers and falls into three main approaches. The first approach focuses on learnability in description logics and builds some simple algorithms [24], [18], [7], [11], [19]. The second approach studies concept learning in description logics using refinement operators [6], [5], [17], [20], [2]; Rizzo et al. [29] integrated refinement operators into terminological decision trees. The third approach exploits bisimulation for concept learning problems in description logics [10], [27], [30], [9]. In [19], Lambrix and Maleki proposed a simple algorithm for learning composite concepts. Lehmann and Hitzler [20], Badea and Nienhuys-Cheng [2], and Iannone et al. [17] studied concept learning in DLs using refinement operators as in inductive logic programming; apart from refinement operators, scoring functions and search strategies also play important roles in the algorithms proposed in those works. Nguyen and Szałas [26] applied bisimulation in DLs to model indiscernibility of objects; their work is pioneering in using bisimulation for concept learning in DLs [27], [30], [9]. These works propose a general method for smoothing the domain of the interpretation and, through it, building the concept to be learned. Unlike all previous work, which handles consistent knowledge bases, in this paper we develop a bisimulation-based method for inconsistent knowledge systems, that is, for concept learning in paraconsistent DLs.

The rest of this paper is structured as follows. In Section 2, we present the notation and semantics of the paraconsistent DLs considered in this paper. We recall bisimilarity for paraconsistent description logics in Section 3. A learning algorithm based on bisimulation is presented in Section 4. In Section 5, we evaluate this algorithm by means of our implementation. Finally, in Section 6 we summarize our work and draw conclusions.

II. PRELIMINARIES

Following the recommendation of W3C for OWL, like [25], [3], [23], we use the traditional syntax of DLs and only change its semantics to cover paraconsistency. In this work, we consider a DL-signature, which is a set Σ = I ∪ C ∪ R, where I is a countable set of individual names, C is a finite set of concept names, and R is a countable set of role names. We use uppercase letters like A, B to denote concept names, lowercase letters like r, s to denote role names, and lowercase letters like a, b to denote individual names. Let Φ be a set of features among I (inverse roles), O (nominals), Q (qualified number restrictions), U (the universal role) and Self (local reflexivity of a role).

Recall that, under the traditional semantics, every query is a logical consequence of an inconsistent knowledge base. A knowledge base may be inconsistent, for instance, when it contains both individual assertions A(a) and ¬A(a) for some A ∈ C and a ∈ I. Paraconsistent reasoning is inconsistency-tolerant and aims to derive meaningful logical consequences even when the knowledge base is inconsistent. This problem can be handled by a three-valued logic (t: true, f: false and i: inconsistent).
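To make this concrete, here is a minimal Python sketch (the data structures and all names are illustrative, not taken from the paper's implementation) that stores positive and negative concept assertions separately and flags the individuals on which a knowledge base asserts both A(a) and ¬A(a), i.e., the source of inconsistency discussed above.

```python
# Minimal sketch (illustrative names): an ABox kept as positive/negative concept assertions.
# An individual a is "inconsistent" for A when both A(a) and ¬A(a) are asserted.

from collections import defaultdict

positive = defaultdict(set)   # concept name -> individuals asserted to belong to it
negative = defaultdict(set)   # concept name -> individuals asserted not to belong to it

def assert_concept(concept, individual, negated=False):
    (negative if negated else positive)[concept].add(individual)

def inconsistent_individuals(concept):
    """Individuals a with both A(a) and ¬A(a) asserted."""
    return positive[concept] & negative[concept]

assert_concept("Device", "memory")             # Device(memory)
assert_concept("Device", "memory", True)       # ¬Device(memory)
print(inconsistent_individuals("Device"))      # {'memory'}
```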
The general approach is to define a semantics s such that, given a knowledge base KB, the set Cons_s(KB) of logical consequences of KB w.r.t. semantics s is a subset of the set Cons(KB) of logical consequences of KB w.r.t. the traditional semantics, with the property that Cons_s(KB) contains mainly meaningful logical consequences of KB and approximates Cons(KB) as much as possible. That is, the traditional semantics is weakened in an appropriate way.

A paraconsistent semantics from the defined family, say s, is characterized by four parameters, denoted by s_C, s_R, s_∀∃Q and s_GCI, with the following intuitive meanings:

• s_C ∈ {2, 3} specifies the number of possible truth values of assertions of the form A(x), where A ∈ C,
• s_R ∈ {2, 3} specifies the number of possible truth values of assertions of the form r(x, y), where r ∈ R,
• s_∀∃Q ∈ {+, ±} specifies which of the two considered semantics is used for concepts of the form ∀R.C, ∃R.C, ≤ n R.C or ≥ n R.C,
• s_GCI ∈ {w, m, s} specifies one of the three semantics for general concept inclusions: weak (w), moderate (m), strong (s).

In the case s_C = 2, the truth values of assertions of the form A(x) are t (true) and f (false). In the case s_C = 3, the third truth value is i (inconsistent). In the case s_C = 4, the additional truth value is u (unknown). When s_C = 3, one can identify inconsistency with the lack of knowledge, and the third value i can be read either as inconsistent or as unknown. Similar explanations can be stated for s_R. In this paper, the problem is handled in three-valued logic. Therefore, the set of considered paraconsistent semantics is:

S = {2, 3} × {2, 3} × {+, ±} × {w, m, s}.

For s ∈ S, an s-interpretation is a pair I = ⟨∆^I, ·^I⟩, where ∆^I is a non-empty set called the domain, and ·^I is the interpretation function, which maps every individual name a to an element a^I ∈ ∆^I, every concept name A to a pair A^I = ⟨A^I_+, A^I_−⟩ of subsets of ∆^I, and every role name r to a pair r^I = ⟨r^I_+, r^I_−⟩ of binary relations on ∆^I such that:

• if s_C = 2 then A^I_+ = ∆^I \ A^I_−,
• if s_C = 3 then A^I_+ ∪ A^I_− = ∆^I,
• if s_R = 2 then r^I_+ = (∆^I × ∆^I) \ r^I_−,
• if s_R = 3 then r^I_+ ∪ r^I_− = ∆^I × ∆^I.

The intuition behind A^I = ⟨A^I_+, A^I_−⟩ is that A^I_+ gathers positive items about A, while A^I_− gathers negative items about A. Thus, A^I can be treated as a function from ∆^I to {t, f, i} defined by:

A^I(x) = t  if x ∈ A^I_+ and x ∉ A^I_−,
A^I(x) = f  if x ∈ A^I_− and x ∉ A^I_+,        (1)
A^I(x) = i  if x ∈ A^I_+ and x ∈ A^I_−.

Informally, A^I(x) can be thought of as the truth value of x ∈ A^I. Note that A^I(x) ∈ {t, f} if s_C = 2, and A^I(x) ∈ {t, f, i} if s_C = 3. The intuition behind r^I = ⟨r^I_+, r^I_−⟩ is similar; r^I(x, y) ∈ {t, f} if s_R = 2, and r^I(x, y) ∈ {t, f, i} if s_R = 3.

The interpretation function ·^I maps a role R to a pair R^I = ⟨R^I_+, R^I_−⟩, defined for the case where R is not a role name as follows:

(r⁻)^I = ⟨(r^I_+)⁻¹, (r^I_−)⁻¹⟩,     U^I = ⟨∆^I × ∆^I, ∅⟩.

The function ·^I maps a complex concept C to a pair C^I = ⟨C^I_+, C^I_−⟩ of subsets of ∆^I, defined as in [27]:

⊤^I = ⟨∆^I, ∅⟩,
⊥^I = ⟨∅, ∆^I⟩,
({a})^I = ⟨{a^I}, ∆^I \ {a^I}⟩,
(¬C)^I = ⟨C^I_−, C^I_+⟩,
(C ⊓ D)^I = ⟨C^I_+ ∩ D^I_+, C^I_− ∪ D^I_−⟩,
(C ⊔ D)^I = ⟨C^I_+ ∪ D^I_+, C^I_− ∩ D^I_−⟩,
(∃R.Self)^I = ⟨{x ∈ ∆^I | (x, x) ∈ R^I_+}, {x ∈ ∆^I | (x, x) ∈ R^I_−}⟩;

if s_∀∃Q = + then

(∃R.C)^I = ⟨{x ∈ ∆^I | ∃y ((x, y) ∈ R^I_+ ∧ y ∈ C^I_+)}, {x ∈ ∆^I | ∀y ((x, y) ∈ R^I_+ → y ∈ C^I_−)}⟩,
(∀R.C)^I = ⟨{x ∈ ∆^I | ∀y ((x, y) ∈ R^I_+ → y ∈ C^I_+)}, {x ∈ ∆^I | ∃y ((x, y) ∈ R^I_+ ∧ y ∈ C^I_−)}⟩,
(≥ n R.C)^I = ⟨{x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∈ C^I_+} ≥ n}, {x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∉ C^I_−} < n}⟩,
(≤ n R.C)^I = ⟨{x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∉ C^I_−} ≤ n}, {x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∈ C^I_+} > n}⟩;

if s_∀∃Q = ± then

(∃R.C)^I = ⟨{x ∈ ∆^I | ∃y ((x, y) ∈ R^I_+ ∧ y ∈ C^I_+)}, {x ∈ ∆^I | ∀y ((x, y) ∉ R^I_− → y ∈ C^I_−)}⟩,
(∀R.C)^I = ⟨{x ∈ ∆^I | ∀y ((x, y) ∉ R^I_− → y ∈ C^I_+)}, {x ∈ ∆^I | ∃y ((x, y) ∈ R^I_+ ∧ y ∈ C^I_−)}⟩,
(≥ n R.C)^I = ⟨{x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∈ C^I_+} ≥ n}, {x ∈ ∆^I | #{y | (x, y) ∉ R^I_− ∧ y ∉ C^I_−} < n}⟩,
(≤ n R.C)^I = ⟨{x ∈ ∆^I | #{y | (x, y) ∉ R^I_− ∧ y ∉ C^I_−} ≤ n}, {x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∈ C^I_+} > n}⟩.

We denote by Γ a set of concepts, and write Γ^I_+ = {C^I_+ | C ∈ Γ}, Γ^I_− = {C^I_− | C ∈ Γ} and Γ^I = ⟨Γ^I_+, Γ^I_−⟩.

Example 1: An inconsistent knowledge system in L referring to electronic devices:

I = {Cellphone, Bluetooth, Laptop, Memory, Size, Weight}
C = {Device, General}
R = {hasGeneral}
A = {General(Memory), General(Size), General(Weight), General(Bluetooth), Device(Cellphone), Device(Laptop), Device(Memory), hasGeneral(Cellphone, Size), hasGeneral(Cellphone, Memory), hasGeneral(Cellphone, Bluetooth), hasGeneral(Laptop, Size), hasGeneral(Cellphone, Weight)}
T = {Device = ∃hasGeneral}

This knowledge base is inconsistent because both concepts Device and General contain the object Memory. An interpretation of the inconsistent knowledge base is as follows:

∆^I = {a, b, c, d, e, f}, Cellphone^I = a, Bluetooth^I = b, Laptop^I = c, Memory^I = d, Size^I = e, Weight^I = f,
Device^I = {a, c, d}, General^I = {b, d, e, f}.
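As an illustration of the pair-of-sets semantics, the following Python sketch encodes the interpretation of Example 1 and evaluates equation (1) and the s_∀∃Q = + semantics of ∃R.C on it. The paper only lists the positive parts of the extensions; the negative parts below are an assumed completion satisfying the s_C = 3 condition, so this is an illustration rather than a reproduction of the authors' system.

```python
from itertools import product

# Pair-of-sets semantics on Example 1 (s_C = 3, s_∀∃Q = '+').
# Negative parts are an assumed completion with A_+ ∪ A_- = ∆ (not given in the paper).

DOMAIN = {"a", "b", "c", "d", "e", "f"}   # Cellphone, Bluetooth, Laptop, Memory, Size, Weight

concepts = {
    "Device":  ({"a", "c", "d"}, {"b", "d", "e", "f"}),   # 'd' (Memory) is in both parts
    "General": ({"b", "d", "e", "f"}, {"a", "c"}),
}

HAS_GENERAL_POS = {("a", "e"), ("a", "d"), ("a", "b"), ("c", "e"), ("a", "f")}
roles = {
    # negative part taken as the complement of the positive part (assumed completion)
    "hasGeneral": (HAS_GENERAL_POS, set(product(DOMAIN, repeat=2)) - HAS_GENERAL_POS),
}

def truth_value(pair, x):
    """Three-valued truth value of x in A^I, as in equation (1)."""
    pos, neg = pair
    if x in pos and x not in neg:
        return "t"
    if x in neg and x not in pos:
        return "f"
    return "i"

def exists_role(role_pair, concept_pair):
    """(∃R.C)^I under the s_∀∃Q = '+' semantics."""
    r_pos, _ = role_pair
    c_pos, c_neg = concept_pair
    pos = {x for x in DOMAIN if any((x, y) in r_pos and y in c_pos for y in DOMAIN)}
    neg = {x for x in DOMAIN if all((x, y) not in r_pos or y in c_neg for y in DOMAIN)}
    return pos, neg

print(truth_value(concepts["Device"], "d"))                      # 'i': Memory is an inconsistent item
print(exists_role(roles["hasGeneral"], concepts["General"])[0])  # {'a', 'c'}: Cellphone and Laptop
```

Under this encoding, the positive part of (∃hasGeneral.General)^I contains exactly the Cellphone and Laptop objects, which matches the intent of the TBox axiom for Device.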
III. BISIMULATION

Bisimulation is a binary relation between the nodes of a labeled graph. We show how to modify and extend bisimulation to deal with richer logic languages. Bisimulation is of interest to researchers and is applied in practice, with three main applications: (i) separating the expressive powers of logic languages; (ii) minimizing interpretations and labeled state transition systems; (iii) concept learning in description logics. In this section, we consider bisimilarity for concept learning in description logics when inconsistencies occur. Bisimulation has been applied to concept learning problems in description logics with consistent knowledge [10], [30], [12], [27]. The idea is to use models of KB and bisimilarity in these models to guide the search for the concept C.

Let Φ ⊆ {I, O, Q, U, Self} be a set of features, s ∈ S a paraconsistent semantics, and I, I′ s-interpretations. A non-empty binary relation Z ⊆ ∆^I × ∆^{I′} is called a (Φ, s)-bisimulation between I and I′ if the following conditions hold for every a ∈ I, x, y ∈ ∆^I, x′, y′ ∈ ∆^{I′}, A ∈ C, r ∈ R and every role R of ALC_Φ different from U:

(1) Z(a^I, a^{I′}),
(2) Z(x, x′) ⇒ [A^I_+(x) ⇒ A^{I′}_+(x′)],
(3) Z(x, x′) ⇒ [A^I_−(x) ⇒ A^{I′}_−(x′)],
(4) [Z(x, x′) ∧ R^I_+(x, y)] ⇒ ∃y′ ∈ ∆^{I′} [Z(y, y′) ∧ R^{I′}_+(x′, y′)],
(5) if s_∀∃Q = + then [Z(x, x′) ∧ R^{I′}_+(x′, y′)] ⇒ ∃y ∈ ∆^I [Z(y, y′) ∧ R^I_+(x, y)],
(6) if s_∀∃Q = ± then [Z(x, x′) ∧ ¬R^{I′}_−(x′, y′)] ⇒ ∃y ∈ ∆^I [Z(y, y′) ∧ ¬R^I_−(x, y)],
(7) if O ∈ Φ then Z(x, x′) ⇒ (x = a^I ⇔ x′ = a^{I′}),
(8) if Q ∈ Φ then: if Z(x, x′) holds and y1, ..., yn (n ≥ 1) are pairwise different elements of ∆^I such that R^I_+(x, yi) holds for every 1 ≤ i ≤ n, then there exist pairwise different elements y′1, ..., y′n of ∆^{I′} such that R^{I′}_+(x′, y′i) and Z(yi, y′i) hold for every 1 ≤ i ≤ n,
(9) if Q ∈ Φ and s_∀∃Q = + then: if Z(x, x′) holds and y′1, ..., y′n (n ≥ 1) are pairwise different elements of ∆^{I′} such that R^{I′}_+(x′, y′i) holds for every 1 ≤ i ≤ n, then there exist pairwise different elements y1, ..., yn of ∆^I such that R^I_+(x, yi) and Z(yi, y′i) hold for every 1 ≤ i ≤ n,
(10) if Q ∈ Φ and s_∀∃Q = ± then: if Z(x, x′) holds and y′1, ..., y′n (n ≥ 1) are pairwise different elements of ∆^{I′} such that ¬R^{I′}_−(x′, y′i) holds for every 1 ≤ i ≤ n, then there exist pairwise different elements y1, ..., yn of ∆^I such that ¬R^I_−(x, yi) and Z(yi, y′i) hold for every 1 ≤ i ≤ n,
(11) if U ∈ Φ then ∀x ∈ ∆^I ∃x′ ∈ ∆^{I′} Z(x, x′),
(12) if U ∈ Φ then ∀x′ ∈ ∆^{I′} ∃x ∈ ∆^I Z(x, x′),
(13) if Self ∈ Φ then Z(x, x′) ⇒ [r^I_+(x, x) ⇒ r^{I′}_+(x′, x′)],
(14) if Self ∈ Φ then Z(x, x′) ⇒ [r^I_−(x, x) ⇒ r^{I′}_−(x′, x′)].

As a consequence, if I and I′ are s-interpretations that are (Φ, s)-bisimilar to each other, then I |=_s A iff I′ |=_s A [27].
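The following simplified Python sketch checks the core conditions (2)-(5) for a candidate relation Z between two finite interpretations stored in the pair-of-sets form used above. The O, Q, U, Self and ± conditions are omitted, so this is only a partial check under those simplifying assumptions, not the full (Φ, s)-bisimulation test.

```python
# Simplified sketch: check the concept-name conditions (2)-(3) and the forward/back
# role-successor conditions (4)-(5), assuming s_∀∃Q = '+'.
# Interpretations: {"concepts": {name: (pos, neg)}, "roles": {name: (pos, neg)}}.
# Z is a set of pairs (x, x2) with x in the domain of I and x2 in the domain of J.

def is_core_bisimulation(Z, I, J):
    for (x, x2) in Z:
        # Conditions (2)-(3): positive/negative membership in each concept name is preserved.
        for A, (pos, neg) in I["concepts"].items():
            pos2, neg2 = J["concepts"][A]
            if (x in pos and x2 not in pos2) or (x in neg and x2 not in neg2):
                return False
        # Conditions (4)-(5): every positive role successor is matched by a Z-related successor.
        for r, (rpos, _) in I["roles"].items():
            rpos2, _ = J["roles"][r]
            for y in [v for (u, v) in rpos if u == x]:
                if not any((y, y2) in Z for (u2, y2) in rpos2 if u2 == x2):
                    return False
            for y2 in [v for (u, v) in rpos2 if u == x2]:
                if not any((y, y2) in Z for (u, y) in rpos if u == x):
                    return False
    return True

# On a single interpretation, the identity relation passes this check:
# is_core_bisimulation({(x, x) for x in domain}, I, I) should return True.
```

This matches the intuition that every element is indiscernible from itself; the interesting bisimulations are the larger ones, which group objects that cannot be distinguished by the language features in Φ.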
IV. CONCEPT LEARNING FOR PARACONSISTENT DESCRIPTION LOGICS

The concept learning problem is similar to binary classification in traditional machine learning. The difference is that in paraconsistent description logics, objects are described not only by attributes but also by binary relationships between objects. As bisimulation is the notion for characterizing indiscernibility of objects in paraconsistent description logics, it is very useful for concept learning in inconsistent knowledge base systems.

Definition 2 (Learning problem in paraconsistent description logics): Let I be a finite interpretation (given as a training information system), KB a knowledge base in a DL L, and E+, E− sets of individuals. Learn a concept C in L such that:

(1) KB |= C(a) for all a ∈ E+ with a ∉ E−,
(2) KB |= ¬C(a) for all a ∈ E− with a ∉ E+,
(3) KB |= C(a) and KB |= ¬C(a) for all a ∈ E+ ∩ E−.

The goal of learning is to find a correct concept with respect to the examples. This can be seen as a search process in the space of concepts. A natural idea is to impose an ordering on this search space and to use models of KB and bisimulation in those models to guide the search for C. The main idea of this method is to smooth the domain ∆^I of the training information system I using selectors. Based on that idea, the concept learning approach is broadly described as follows.

Let Ap ∈ C be a concept name standing for the decision attribute and suppose that Ap can be expressed by a concept C in L_{Σ+,Φ+}, where Σ+ ⊆ Σ \ {Ap} and Φ+ ⊆ Φ. Let I be a training information system over Σ and Φ. How can we learn such a concept C on the basis of I? The trace of the learning algorithm is as follows (a simplified sketch of this refinement loop is given after the list):

1) Starting from the partition {∆^I}, we smooth this partition sequentially until we reach the partition corresponding to Ap. This smoothing process can be stopped sooner when the current partition is consistent with E or satisfies certain conditions.
2) In the process of smoothing the partition of ∆^I, the blocks created at all steps are Y1, Y2, ..., Yn. Each newly generated block is given a new index by increasing the value of n. For each 1 ≤ i ≤ n, we store the following information:
   • Yi is characterized by a concept Ci such that Ci^I = Yi,
   • whether Yi is split by E,
   • the index of the largest block Yj such that Yi ⊆ Yj and Yj is not split by E.
3) The current partition is denoted Y = {Yi1, Yi2, ..., Yik} ⊆ {Y1, Y2, ..., Yn}.
4) When the current partition becomes consistent with Ap, return Ci1 ⊔ ... ⊔ Cij, where i1, ..., ij are indices such that Yi1, ..., Yij are all the blocks of the current partition that are subsets of Ap.
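The smoothing loop of steps 1)-4) can be sketched in Python as follows. The selectors are given here as (name, extension) pairs and the consistency check is reduced to "no block mixes positive and negative examples"; in the actual method, selectors are derived from the signature and bisimulation, so this code is only a control-flow illustration under those simplifying assumptions.

```python
# Control-flow sketch of the bisimulation-based smoothing loop (steps 1-4).
# 'selectors' stands in for the basic concepts derived from the signature; here they
# are plain (name, extension) pairs, which is an illustrative simplification.

def learn_concept(domain, positive_examples, negative_examples, selectors):
    # Step 1: start from the one-block partition of the domain.
    partition = [{"objects": set(domain), "concept": "⊤"}]

    def mixes_examples(block):
        objs = block["objects"]
        return objs & positive_examples and objs & negative_examples

    changed = True
    while changed and any(mixes_examples(b) for b in partition):
        changed = False
        for name, extension in selectors:
            refined = []
            for block in partition:
                inside = block["objects"] & extension
                outside = block["objects"] - extension
                if inside and outside and mixes_examples(block):
                    # Step 2: each new block is characterized by a concept C_i with C_i^I = Y_i.
                    refined.append({"objects": inside,
                                    "concept": f"({block['concept']} ⊓ {name})"})
                    refined.append({"objects": outside,
                                    "concept": f"({block['concept']} ⊓ ¬{name})"})
                    changed = True
                else:
                    refined.append(block)
            partition = refined

    # Step 4: return the disjunction of the blocks contained in the positive examples.
    good = [b["concept"] for b in partition
            if b["objects"] and b["objects"] <= positive_examples]
    return " ⊔ ".join(good) if good else "⊥"
```

In the full algorithm the extensions come from the paraconsistent semantics of Section II, and the recorded block information (whether a block is split by E, and its largest unsplit ancestor) is used both to stop early and to simplify the returned concept.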
V. PRELIMINARY EVALUATION

A. The datasets

We applied the proposed model to a set of electronic devices. We built several datasets including concepts, roles, and individuals, and we labeled and tested the datasets with different numbers of inconsistent concepts. The device dataset contains electronic device information and attributes, including 941 types of configurations (concepts), 32 links between objects (roles), and 521 objects (individuals). Each object in the dataset is expressed by a concept. We use data from part of the 627 subjects for training and validation; data for the other types of equipment is used for testing. After some preprocessing steps on these datasets, we tested the inconsistent dataset in Protégé; Protégé reported an error when running the HermiT reasoner, as shown in the figure below.

Fig. Testing the inconsistent dataset on Protégé. The data is inconsistent here because the plastic of the cover is also present on the screen, while the cover and the screen are declared disjoint (the two layers are totally unrelated to each other).

B. Experimental results

We ran several experiments with different rates of inconsistent data to evaluate the effect of the proposed algorithm. In order to analyze the contribution of the labeled datasets, we also generated subsets with inconsistent data rates of 25, 30, 35, 40, 45, 50, 55, and 60 percent. To illustrate the influence of the inconsistency parameter, we additionally measured ten-fold cross-validation Accuracy, Precision, Recall, and F1-Measure. The results are shown in Table I.

TABLE I. THE INFLUENCE OF THE INCONSISTENCY PARAMETER IN THE KNOWLEDGE SYSTEM

Inconsistent (%)   Accuracy (%)   Precision (%)   Recall (%)   F1-Measure (%)
25                 80.00          66.67           100          80.00
30                 78.43          63.54           100          75.00
35                 75.62          60.00           100          70.00
40                 72.14          56.00           100          66.00
45                 71.00          52.48           100          63.67
50                 70.48          48.23           100          62.33
55                 70.32          44.00           100          59.00
60                 70.00          40.00           100          57.14

Since the inconsistency parameter acts as a termination criterion, we observe, as expected, that lower inconsistency values lead to significant increases in accuracy. Overall, the presented approach is able to learn accurate concepts in the presence of inconsistency with a reasonably low number of expensive reasoner requests. Note that the approach is able to learn in a very expressive language with arbitrarily nested structures. Learning many levels of structure has recently been identified as a key issue for structured machine learning, and our work provides a clear advance on this front. The results show that our approach is competitive with state-of-the-art Semantic Web systems when inconsistency appears in a system.
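As a quick sanity check on Table I, the reported F1-Measure is the harmonic mean of Precision and Recall; for instance, the first row can be reproduced with a one-line computation (the numbers are taken directly from the table).

```python
# F1 as the harmonic mean of precision and recall, using the first row of Table I.
precision, recall = 66.67, 100.0
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))   # ≈ 80.0, matching the reported F1-Measure
```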
VI. CONCLUSIONS AND FUTURE WORKS

In this paper, a concept learning model for paraconsistent knowledge base systems is introduced and discussed. The key idea of this work is to use models of KB and bisimulation in those models to guide the search for the concept C. This technique, along with the partitioning strategies used, has been examined from both theoretical and experimental perspectives. This work can be extended in various directions. We can extend the comparison with other methods, such as those based on refinement operators, and, through further empirical evaluation, examine the correlation between learning algorithms.

REFERENCES

[1] F. Baader, D. Calvanese, D. McGuinness, D. Nardi, and P. Patel-Schneider, editors. Description Logic Handbook. Cambridge University Press, 2002.
[2] L. Badea and S.-H. Nienhuys-Cheng. Refining concepts in description logics. In Proceedings of the 2000 International Workshop on Description Logics (DL2000), Aachen, Germany, August 17-19, 2000, pages 31–44, 2000.
[3] N. Belnap. A useful four-valued logic. In G. Epstein and J. Dunn, editors, Modern Uses of Many Valued Logic, pages 8–37. Reidel, 1977.
[4] P. Besnard and A. Hunter. Quasi-classical logic: Non-trivializable classical reasoning from inconsistent information. In Proc. of ECSQARU'95, volume 946 of LNCS, pages 44–51. Springer, 1995.
[5] L. Bühmann, J. Lehmann, and P. Westphal. DL-Learner - a framework for inductive learning on the Semantic Web. J. Web Sem., 39:15–24, 2016.
[6] L. Bühmann, J. Lehmann, P. Westphal, and S. Bin. DL-Learner structured machine learning on Semantic Web data. In Companion of The Web Conference 2018, WWW 2018, Lyon, France, April 23-27, 2018, pages 467–471, 2018.
[7] W. W. Cohen and H. Hirsh. Learning the CLASSIC description logic: Theoretical and experimental results. In Proceedings of the 4th International Conference on Principles of Knowledge Representation and Reasoning (KR'94), Bonn, Germany, May 24-27, 1994, pages 121–133, 1994.
[8] N. C. A. da Costa. Erratum: "On the theory of inconsistent formal systems". Notre Dame Journal of Formal Logic, 16(4):608.
[9] A. Divroodi, Q.-T. Ha, L. Nguyen, and H. Nguyen. On C-learnability in description logics. In Proc. of ICCCI'2012 (1), volume 7653 of LNCS, pages 230–238. Springer, 2012.
[10] A. R. Divroodi, Q. Ha, L. A. Nguyen, and H. S. Nguyen. On the possibility of correct concept learning in description logics. Vietnam J. Computer Science, 5(1):3–14, 2018.
[11] N. Fanizzi, C. d'Amato, and F. Esposito. DL-FOIL concept learning in description logics. In Inductive Logic Programming, 18th International Conference, ILP 2008, Prague, Czech Republic, September 10-12, 2008, Proceedings, pages 107–121, 2008.
[12] Q.-T. Ha, T.-L.-G. Hoang, L. Nguyen, H. Nguyen, A. Szałas, and T.-L. Tran. A bisimulation-based method of concept learning for knowledge bases in description logics. In Proc. of SoICT 2012, pages 241–249. ACM, 2012.
[13] K. Höffner, S. Walter, E. Marx, R. Usbeck, J. Lehmann, and A. N. Ngomo. Survey on challenges of question answering in the Semantic Web. Semantic Web, 8(6):895–920, 2017.
[14] I. Horrocks, P. Patel-Schneider, and F. van Harmelen. From SHIQ and RDF to OWL: The making of a web ontology language. Journal of Web Semantics, 1(1):7–26, 2003.
[15] A. Hunter. Paraconsistent logics. In D. Gabbay and P. Smets, editors, Handbook of Defeasible Reasoning and Uncertain Information, pages 11–36. Kluwer, 1998.
[16] A. Hunter. Reasoning with contradictory information using quasi-classical logic. J. Log. Comput., 10(5):677–703, 2000.
[17] L. Iannone, I. Palmisano, and N. Fanizzi. An algorithm based on counterfactuals for concept learning in the Semantic Web. Appl. Intell., 26(2):139–159, 2007.
[18] B. Konev, A. Ozaki, and F. Wolter. A model for learning description logic ontologies based on exact learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 1008–1015, 2016.
[19] P. Lambrix and J. Maleki. Learning composite concepts in description logics: A first step. In Foundations of Intelligent Systems, 9th International Symposium, ISMIS '96, Zakopane, Poland, June 9-13, 1996, Proceedings, pages 68–77, 1996.
[20] J. Lehmann and P. Hitzler. Concept learning in description logics using refinement operators. Machine Learning, 78(1-2):203–250, 2010.
[21] Y. Ma and P. Hitzler. Paraconsistent reasoning for OWL. In A. Polleres and T. Swift, editors, Proc. of Web Reasoning and Rule Systems, volume 5837 of LNCS, pages 197–211. Springer, 2009.
[22] Y. Ma, P. Hitzler, and Z. Lin. Paraconsistent reasoning for expressive and tractable description logics. In Proc. of Description Logics, 2008.
[23] J. Małuszyński, A. Szałas, and A. Vitória. Paraconsistent logic programs with four-valued rough sets. In C.-C. Chan, J. Grzymala-Busse, and W. Ziarko, editors, Proc. of RSCTC'2008, volume 5306 of LNAI, pages 41–51, 2008.
[24] R. Mehri and V. Haarslev. Applying machine learning to enhance optimization techniques for OWL reasoning. In Proceedings of the 30th International Workshop on Description Logics, Montpellier, France, July 18-21, 2017, 2017.
[25] L. Nguyen and A. Szałas. Three-valued paraconsistent reasoning for Semantic Web agents. In P. Jędrzejowicz et al., editors, Proc. of KES-AMSTA 2010, Part I, volume 6070 of LNAI, pages 152–162. Springer, 2010.
[26] L. Nguyen and A. Szałas. Logic-based roughification. In A. Skowron and Z. Suraj, editors, Rough Sets and Intelligent Systems (To the Memory of Professor Zdzisław Pawlak), Vol. 1, pages 529–556. Springer, 2012.
[27] L. A. Nguyen, T. H. K. Nguyen, N. T. Nguyen, and Q. Ha. Bisimilarity for paraconsistent description logics. Journal of Intelligent and Fuzzy Systems, 32(2):1203–1215, 2017.
[28] N. T. Nguyen. Inconsistency of knowledge and collective intelligence. Cybernetics and Systems, 39(6):542–562, 2008.
[29] G. Rizzo, N. Fanizzi, J. Lehmann, and L. Bühmann. Integrating new refinement operators in terminological decision trees learning. In Knowledge Engineering and Knowledge Management - 20th International Conference, EKAW 2016, Bologna, Italy, November 19-23, 2016, Proceedings, pages 511–526, 2016.
[30] T. Tran, Q. Ha, T. Hoang, L. Nguyen, and H. Nguyen. Bisimulation-based concept learning in description logics. Fundam. Inform., 133(2-3):287–303, 2014.
[31] Q. B. Vo, T. H. Tran, and T. H. K. Nguyen. On the use of surplus division to facilitate efficient negotiation in the presence of incomplete information. In Knowledge-Based and Intelligent Information & Engineering Systems: Proceedings of the 20th International Conference KES-2016, York, UK, 5-7 September 2016, pages 295–304, 2016.
