

Learning Conditional Preference Networks with Queries
Frédéric Koriche and Bruno Zanuttini
Université de Caen Basse-Normandie, France

Abstract

We investigate the problem of eliciting CP-nets in the well-known model of exact learning with equivalence and membership queries. The goal is to identify a preference ordering with a binary-valued CP-net by guiding the user through a sequence of queries. Each example is a dominance test on some pair of outcomes. In this setting, we show that acyclic CP-nets are not learnable with equivalence queries alone, while they are learnable with the help of membership queries if the supplied examples are restricted to swaps. A similar property holds for tree CP-nets with arbitrary examples. In fact, membership queries allow us to provide attribute-efficient algorithms for which the query complexity is only logarithmic in the number of attributes. Such results highlight the utility of this model for eliciting CP-nets in large multi-attribute domains.

1 Introduction

The spectrum of AI applications that rely on the ability to reason about preferences is extremely wide, ranging from configuration software and recommender systems to autonomous agents and group decision-making. As many, if not most, of these applications are defined over large, multi-attribute domains, a key challenge in preference research is to develop representation languages and elicitation techniques that cope with the exponential size of the outcome space.

Conditional preference networks (CP-nets) have emerged as an expressive language capable of representing ordinal preference relations in a compact and structured manner [Boutilier et al., 2004]. Briefly, a CP-net is a graph where each node is labelled with a table describing the user's preference over alternative values of this node given different values of the parent nodes. For example, the entry Jb ∧ Pb : Sr ≻ Sw might state that, all other things being equal, I prefer a red shirt to a white one if the color of both the jacket and the pants is black. The semantics of a CP-net is defined by a dominance ordering on the outcome space, derived from such readings of the entries in the tables. Based on this relation, a key reasoning task is dominance testing: given a CP-net N and a pair of outcomes (o, o′), determine whether o dominates o′ according to the dominance ordering induced by N.

Ideally, the structure of the network would be known in advance, and the decision-maker should simply "fill the tables" by asking the user how her preference over the values of one node depends on the values of its parents. Yet, in practice, eliciting preferences is far from easy because the dependency graph is generally not known in advance: the decision-maker must therefore seek the interdependencies between attributes and identify a minimal set of parents for each target node. The problem is exacerbated still further by the fact that real-world applications typically involve many irrelevant attributes. For instance, it is not uncommon in recommender systems to describe products using thousands of variables, with the expectation that only a small fraction of these are crucial for describing preferences [Basilico and Hofmann, 2004; Ziegler et al., 2008]. The decision-maker is thus required to select, within a large collection of attributes, a relatively small subset over which the network will be specified.

Such considerations bring into sharp focus the need for query learning algorithms that extract CP-nets by guiding the user through an appropriate sequence of queries. A widely adopted framework for studying this issue is the model of exact learning with equivalence and membership queries [Angluin, 1988]. In essence, equivalence queries simulate a form of passive learning in which the decision-maker observes the user's behavior until she finds a counterexample to her hypothesis. By contrast, membership queries capture a form of active learning by allowing the decision-maker to ask about examples of her own choice. The utility of this model lies in the fact that rich concept classes, including Horn theories, decision trees, and some description logics, have been shown to be learnable with both equivalence and membership queries, while for weaker versions of the model one can prove superpolynomial lower bounds [Angluin et al., 1992; Bshouty, 1995; Frazier and Pitt, 1996].

In the learning model suggested by this study, the target concept is a dominance ordering on the outcome space. Each example is a preference situation involving some pair of outcomes. For a membership query, the learner supplies an example (o, o′) and is told whether o dominates o′, or not. For an equivalence query, the learner presents a CP-net N, and either is told that N correctly identifies the target concept, or is given a counterexample (o, o′). The goal is to identify the target concept using as few resources as possible, where resources refer both to the run time and the number of queries.
From a practical perspective, one must take into account the fact that outcomes are typically not comparable with an equivalent cost. As observed in [Green and Srinivasan, 1978], users can meaningfully compare outcomes if they differ only on very few attributes. Similarly, for the learner, this task can be arduous because dominance testing is generally NP-hard, even for acyclic CP-nets. Thus, our learnability results are defined in terms of a concept class in which the target concept is chosen, and an instance class that circumscribes the set of examples used by equivalence and membership queries.
The key message to be gleaned from this paper is that active learning is required to correctly and efficiently extract preference networks in binary-valued domains. On the one hand, acyclic CP-nets are not learnable with equivalence queries alone, while on the other, they are learnable with equivalence and membership queries, provided that the instance class is restricted to simple outcome pairs for which dominance testing takes linear time. Interestingly, a similar property holds for tree-structured CP-nets by extending the instance class to arbitrary examples. When membership queries are available, we provide attribute-efficient learning algorithms for which the query complexity is linear in the size of the minimal CP-net that identifies the target concept, and logarithmic in the total number of attributes. Such encouraging results pave the way for fast elicitation techniques capable of extracting "small" CP-nets in "large" domains.

2 Preliminaries

The learning problems under consideration in this study are defined over a set of Boolean variables X = {x1, …, xn}. As usual, we refer to xi and x̄i as literals. Given a literal p, we denote by p̄ the opposite of p; for example, if pi is x̄i, then p̄i is xi. A term t is a conjunction of literals. By var(t) we denote the set of variables occurring in t. A term t is maximal for a subset of variables Y ⊆ X if var(t) = Y.

An outcome is a maximal term o for X. Given a term t, we write o[t] for the outcome obtained by making o agree with t; that is, o[t] assigns the literals occurring in t and leaves every other variable as in o. For example, if o = x̄1x2x3 and t = x1x̄2, then o[t] = x1x̄2x3. An outcome o satisfies a term t if o = o[t]. We write 0 (resp. 1) for the outcome that assigns 0 (resp. 1) to every variable in X.

A conditional preference rule (CP-rule) on a variable x is an expression of the form t : p ≻ p̄, where p is a literal of x and t is a term such that x ∉ var(t). Such a rule captures the statement "given that t holds, the value p is preferred to the value p̄ for the variable x, all other things being equal".

A conditional preference table (CP-table) on a variable x with respect to a set Y ⊆ X \ {x} is a set of CP-rules on x of the form t : p ≻ p̄, where t is a term for Y. The CP-table is complete if exactly one rule t : p ≻ p̄ is associated to each maximal term t for Y.

A conditional preference net (CP-net) is a labelled digraph N over a subset var(N) of X, where each node x ∈ var(N) is annotated with a CP-table cpt(x) on x with respect to the set Pa(x) of parents of x in the graph. The CP-net is complete if every node x ∈ var(N) is annotated with a complete CP-table. A CP-net is acyclic if its digraph is acyclic, and tree-structured if its digraph forms a forest, that is, a set of trees.

These notions are illustrated in the left part of Figure 1, where an acyclic complete CP-net is specified for the popular "evening dress" scenario: I unconditionally prefer black (jb, pb) to white (jw, pw) as the color of both the jacket and the pants, while my preference for a red shirt (sr) versus a white one (sw) depends on the combination of jacket and pants.

Figure 1: A CP-net and its preference graph for "evening dress".

The ceteris paribus semantics of a CP-rule t : p ≻ p̄ on a variable x can be described as follows: if an outcome o satisfies the condition t and assigns the value p to x, then o is preferred to the outcome o′ that differs from o only in that it assigns the value p̄ to x. In this case, (o, o′) is called a model of the rule t : p ≻ p̄.

Given a CP-net N and two outcomes o and o′, we say that o dominates o′ for N if there is a sequence (o1, …, om) such that o1 = o′, om = o, and for each i : 1 ≤ i < m, (oi+1, oi) is a model of some CP-rule of N. In this case, (o, o′) is called a model of N, and (o1, …, om) an improving sequence from o′ to o. The dominance ordering of N, denoted ≻N, is the set of all models of N. For example, the dominance ordering of the CP-net for "evening dress" is the transitive closure of the digraph displayed in the right part of Figure 1.

A CP-net N is consistent if ≻N is a strict partial order over the outcome space, that is, if no outcome o satisfies o ≻N o. As reported in [Boutilier et al., 2004], any acyclic CP-net is consistent.

Given two CP-nets N and N′, we say that N′ subsumes N if, for every rule t : p ≻ p̄ in N, there is a rule t′ : p ≻ p̄ in N′ such that t ⊆ t′. A CP-net N is minimal if there is no distinct net N′ subsumed by N and for which ≻N′ = ≻N. For example, the CP-net in Figure 1 is minimal.

Finally, the size |r| of a CP-rule r will be the number of occurrences of literals in its definition. Specifically, if r is of the form t : p ≻ p̄, then |r| = |t| + 2. The size |N| of a CP-net N will be the sum of |r| over all rules r in N. For example, the size of the CP-net in Figure 1 is 2 + 2 + 4 × (2 + 2) = 20.

3 Exact Learning with Queries

The learning criterion expected in this study is that of exact identification, which is achieved if the learner can infer a CP-net that correctly represents the target concept. A concept ≻ is a dominance ordering over the outcome space, and a representation class is a set N of CP-nets. A concept ≻ is representable by N if there is a CP-net N ∈ N such that ≻N = ≻. The concept class CN over N is the set of all concepts that are representable by N. The description size of a concept ≻ in CN will be the minimum of |N| over all representations N of ≻ in N. Finally, an instance or example is a pair (o, o′) of outcomes, and an instance class is a set O of examples.

Two outcomes form a swap if they differ in the value of only one variable. Such examples correspond to simple situations of the form "for this car I prefer it red rather than white", where the color is one of the multiple attributes of the car.
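To make the notation concrete, here is a minimal sketch of the operations above, with outcomes and terms encoded as Python dicts from variable names to Booleans. The encoding and function names are our own illustration, not the paper's.

```python
def apply_term(o, t):
    """Return o[t]: the outcome that agrees with term t and with o elsewhere."""
    result = dict(o)
    result.update(t)
    return result

def satisfies(o, t):
    """An outcome o satisfies a term t if o = o[t]."""
    return all(o[x] == v for x, v in t.items())

def is_swap(o1, o2):
    """Two outcomes form a swap if they differ on exactly one variable."""
    assert o1.keys() == o2.keys()
    return sum(o1[x] != o2[x] for x in o1) == 1

# Example: o = x̄1 x2 x3 and t = x1 x̄2 give o[t] = x1 x̄2 x3.
o = {"x1": False, "x2": True, "x3": True}
t = {"x1": True, "x2": False}
assert apply_term(o, t) == {"x1": True, "x2": False, "x3": True}
assert satisfies(apply_term(o, t), t)
assert is_swap(o, apply_term(o, {"x1": True}))
```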
Let ≻ be a target concept of some concept class CN, and let O be an instance class. The learner may extract information about ≻ using two types of queries. A membership query MQ over O takes an example (o, o′) ∈ O and returns Yes if o ≻ o′, and No otherwise. An equivalence query EQ over O takes a CP-net N ∈ N, and returns Yes if N is a representation of ≻, or returns a counterexample (o, o′) ∈ O otherwise. The counterexample (o, o′) is positive if o ≻ o′, and negative otherwise.

Definition 1. An algorithm A is a query learning algorithm for a concept class CN with respect to an instance class O if there are two polynomials p and q such that, for any target concept in CN, A outputs a representation of the target concept after at most p(n) membership and equivalence queries over O, and in total running time at most q(n).

A query learning algorithm is attribute-efficient if its number of queries p(n) is polynomial in the description size of the target concept, but only polylogarithmic in the number of variables n. Given an instance class O, we say that a concept class CN is attribute-efficiently learnable if there is an attribute-efficient query learning algorithm for CN with respect to O.

The next property states that, in acyclic CP-nets, swaps can be retrieved in linear time by simple rule matching.

Lemma 1. Let N be an acyclic CP-net and (o, o′) be a swap. Then o ≻N o′ if and only if there is a CP-rule r in N such that (o, o′) is a model of r.

Proof. The if direction is immediate. Conversely, if o ≻N o′, then there is an improving sequence from o′ to o in N. Assume w.l.o.g. that o satisfies x1 and o′ = o[x̄1]. Using the suffix fixing rule [Boutilier et al., 2004, Section 5.1], we can assume that the sequence affects only x1 and its ascendants in N. Suppose some ascendant other than x1 were modified in the sequence. By acyclicity, one of those modified ascendants, say xk, has none of its own parents modified in the sequence; but then xk could not be modified back to its value in o′, which entails xk = x1, a contradiction. So only x1 is modified, the sequence reduces to the application of a single CP-rule of N, and this concludes the proof.
4 Learning Acyclic CP-nets

Acyclic CP-nets take a central part in preference research by providing the right level of expressivity for many real-world applications, while remaining tractable for certain reasoning tasks such as outcome optimization [Domshlak et al., 2001; Boutilier et al., 2004]. We begin with some useful properties.

A CP-net N is a minimal representation of a concept ≻ if N is a minimal CP-net representing ≻ and satisfying the following: for any variables x and y, and any rule r ∈ N on x, if y is a parent of x in N, then there is a model (o, o′) of r such that exactly one of (o[y], o′[y]) and (o[ȳ], o′[ȳ]) is not a model of r. This amounts to saying that y is a relevant parent of x. Clearly, such a representation exists: take any representation and remove all irrelevant parents.

Lemma 2. The minimal representation of any concept is unique.

Proof (sketch). Suppose that N and N′ are minimal representations of the same concept, and that N′ subsumes N. Let r′ ∈ N′ be a rule of the form t′ : p ≻ p̄. Any swap (o, o′) for which o = o[t′ ∧ p] and o′ = o[p̄] is a model of r′, and hence, by Lemma 1, a model of some rule r in N of the form t : p ≻ p̄. If t ⊈ t′, then there is a variable y ∈ var(t) \ var(t′) such that (o[y], o′[y]) and (o[ȳ], o′[ȳ]) are both models of r′, but exactly one of these pairs is not a model of r, contradicting the fact that y is a relevant parent in the minimal representation N.

Clearly, the strength of query-directed learning lies in membership queries, which model not only the interaction with a user, but also the careful crafting of experiments by a decision-maker in order to observe the response of the user. In order to demonstrate that a class of CP-nets is not learnable with equivalence queries alone, we shall use the technique of approximate fingerprints introduced in [Angluin, 1990]. Intuitively, a concept class CN has approximate fingerprints if it contains a subclass C∗N such that, for any hypothesis N in CN supplied by the learner, the user can choose a counterexample for N that eliminates only a superpolynomially small fraction of the candidate concepts in C∗N. By iterating this process, the learner cannot be certain of the target concept in C∗N after only polynomially many equivalence queries.

We now show that acyclic and complete CP-nets have approximate fingerprints, even if examples are restricted to swaps. Thus, by applying Theorem 1 in [Angluin, 1990], they are not learnable with equivalence queries alone.

Theorem 1. Acyclic complete CP-nets have approximate fingerprints with respect to swap examples.

Proof (sketch). Let C∗N be the class of all concepts represented by a CP-net N∗ with log n root nodes pointing to the same fixed child node x1. The table cpt(x1) includes one rule s : x1 ≻ x̄1, where s is the conjunction of all positive literals in Pa(x1), and n − 1 rules of the form s′ : x̄1 ≻ x1, where s′ is any maximal term for Pa(x1) with at least one negative literal. Clearly, any such N∗ is acyclic and complete, and C∗N contains one concept for each choice of the log n root nodes. Now, let p(n) = nk and let N be any hypothesis of size at most nk. The fingerprint (o, o′) is a swap over x1, constructed as follows. If N includes a rule of the form t : x1 ≻ x̄1 then, because N is complete, the size of its table on x1 is larger than 2^|t|; since |N| ≤ nk, we get |t| ≤ k log n. Thus, an outcome o can be constructed by satisfying t first and filling the rest as necessary, in such a way that the swap (o, o[x̄1]) is classified positively by N but by only a small fraction of the concepts in C∗N; taking logarithms, a counting argument shows that this fraction is less than 1/nk for sufficiently large n. The case where N includes no such rule is symmetric. So (o, o′) is a 1/nk-fingerprint of C∗N with respect to N.
When membership queries are available, we can provide an attribute-efficient algorithm for learning acyclic, and possibly incomplete, CP-nets, provided that the supplied examples are restricted to swaps. As specified in Algorithm 1, the learner iteratively updates her hypothesis N by asking equivalence queries. On seeing a counterexample (o, o′), the learner checks whether N includes a rule that covers either (o′, o) or (o, o′). If this is indeed the case, she asks membership queries in order to refine the condition of that rule. Otherwise, she expands her net with a new rule. Here, each rule r of the form t : pi ≻ p̄i is associated with an outcome or, called the support of r, such that (or[pi], or[p̄i]) is a model of r.

The key routine SEARCHPARENT(xi, pi, o, o′, a, b) finds a new parent of some misclassifying rule r, using only a logarithmic number of membership queries. Using the support or of r and the last counterexample (o, o′), it operates a binary search on the sequence (o1, …, on), where oj is formed by the first j literals occurring in or and the last n − j literals occurring in o. As shown in the lemmas below, SEARCHPARENT is guaranteed to find a new parent in the rule r, by maintaining the invariant that for each explored subsequence (oa, …, ob), there is a parent of xi in the minimal representation N∗ of the target concept whose value is different in oa and ob.

Lemma 3. Let N∗ be the minimal representation of the target concept, let pi, p̄i be a pair of opposite literals for some variable xi, and let oa, ob be two outcomes such that exactly one of (oa[pi], oa[p̄i]) and (ob[pi], ob[p̄i]) is a model of N∗. Then SEARCHPARENT returns a parent xj of xi in N∗ whose value is different in oa and ob, using at most log n membership queries.

Proof. Consider the outcome sequence (o0, …, on), where oj = oa[tj] with tj the conjunction of the first j literals in ob. Since o0 = oa and on = ob, there is some j > 0 such that exactly one of (oj−1[pi], oj−1[p̄i]) and (oj[pi], oj[p̄i]) is a model of N∗; by Lemma 1, there is then a rule t : pi ≻ p̄i (or t : p̄i ≻ pi) in N∗ such that oj satisfies t but oj−1 does not, or vice versa. Since oj−1 and oj differ only on xj, we get xj ∈ var(t), so xj is a parent of xi in N∗ whose value differs in oa and ob. A binary search over the sequence locates such an index j with at most log n membership queries.

Lemma 4. At any stage of Algorithm 1, the hypothesis N is subsumed by the minimal representation N∗ of the target concept.

Proof. Initially, N = ∅, so the property holds. Now, consider an iteration of the main loop and suppose by induction hypothesis (I.H.) that N is subsumed by N∗ before calling EQ. If EQ returns a positive counterexample (o, o′), then by Lemma 1, N∗ includes a rule t∗ : pi ≻ p̄i of which (o, o′) is a model, and the learner adds the rule t : pi ≻ p̄i, where t is the projection of o onto the parent set Pa(xi) of xi in N. Since (by I.H.) N is subsumed by N∗, we have Pa(xi) ⊆ Pa∗(xi), where Pa∗ is the parent set in N∗. Therefore t ⊆ t∗, and hence the invariant is preserved. Dually, if EQ returns a negative counterexample (o, o′), then by Lemma 1, N includes a rule r of the form t : pi ≻ p̄i of which (o, o′) is a model, while N∗ includes no such rule; since for the support or of r we have or = or[t ∧ pi] and (or[pi], or[p̄i]) is a model of N∗, Lemma 3 applies, and SEARCHPARENT returns a parent xj of xi in N∗. Moreover, there is a rule t∗ : pi ≻ p̄i in N∗ such that t ⊆ t∗ and or satisfies t∗. This, together with the fact that xj is a parent of xi in N∗, ensures t ∪ {pj} ⊆ t∗, where pj is the literal of xj satisfied by or; so the invariant is preserved.

Now let us examine the complexity of Algorithm 1. For equivalence queries, each counterexample allows us to find either a new rule or a new literal pj of some rule in the minimal representation N∗ of the target concept. Because the hypothesis N is always subsumed by N∗, this can happen at most |N∗| times. For membership queries, at most log n of these are used in each call of SEARCHPARENT, which always uncovers a new parent of some variable. So the number of these queries is at most e log n, where e is the number of edges in N∗. We therefore obtain:

Theorem 2. Acyclic CP-nets are attribute-efficiently learnable from equivalence and membership queries over swaps: any concept can be identified in polynomial time, using at most |N∗| + 1 equivalence queries and e log n membership queries, where N∗ is the minimal representation of the concept and e is the number of edges in N∗.
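The binary search at the heart of SEARCHPARENT can be sketched as follows. The tuple encoding of outcomes and the `mq` oracle interface are our own assumptions; the toy oracle stands in for the membership queries answered by the user, revealing only whether the hidden rule's condition holds on the blended outcome.

```python
# Sketch of the SEARCHPARENT binary search (our own illustrative encoding).
# Outcomes are tuples of Booleans over variables x1..xn, in a fixed order.
# blend(j) takes the first j values from the support and the rest from the
# counterexample; mq answers one membership query per probe.

def blend(o_support, o_counter, j):
    return o_support[:j] + o_counter[j:]

def search_parent(o_support, o_counter, mq, n):
    """Find j such that mq(blend(j)) is True but mq(blend(j-1)) is False;
    the invariant guarantees that variable x_j is a parent in the target.
    Uses O(log n) oracle calls."""
    lo, hi = 0, n  # invariant: mq(blend(lo)) is False, mq(blend(hi)) is True
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if mq(blend(o_support, o_counter, mid)):
            hi = mid
        else:
            lo = mid
    return hi  # 1-based index of the discovered parent

# Toy target: a rule on x5 with hidden condition "x3 is True".
hidden_condition = lambda o: o[2]  # x3, at 0-based index 2
o_support = (True,) * 6            # satisfies the hidden condition
o_counter = (False,) * 6           # violates it
j = search_parent(o_support, o_counter, hidden_condition, 6)
assert j == 3                      # x3 identified with ~log2(6) queries
```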
5 Learning Tree CP-nets

Binary-valued tree-structured CP-nets constitute a restricted, yet important, class of preference networks for which dominance testing on arbitrary pairs of outcomes is solvable in quadratic time using a backtrack-free search technique [Boutilier et al., 2004]. It is therefore legitimate to study the learnability issues of this class in the general setting where the examples supplied to the learner are arbitrary preference situations. In this context, we first show that tree-structured CP-nets are not learnable with equivalence queries alone.

Theorem 3. Tree CP-nets have approximate fingerprints with respect to arbitrary outcome pairs.

Proof (sketch). We assume w.l.o.g. that n is even, to avoid floors and ceilings. To each permutation π of (x1, …, xn) we associate the smallest set of rules Nπ such that Nπ includes the rule xπ(1) ≻ x̄π(1) and, for each i > 1, the rule xπ(i−1) : xπ(i) ≻ x̄π(i). Let C∗N be the class of all concepts represented by some Nπ.1 Now, let N be any hypothesis. The fingerprint (o, o′) is defined as follows. Assume first that there is an outcome o1 containing at least n/2 ones and such that (o1, 0) is a model of N. Then there is an improving sequence from 0 to o1 in N, and since variables are flipped one by one, it must contain an outcome o with exactly n/2 ones; let o′ = 0. We claim that (o, o′) is a fingerprint of C∗N: indeed, (o, o′) is a model of Nπ only if the first n/2 variables according to π are exactly those assigned 1 by o. Otherwise, any improving sequence in Nπ would have to flip at least one variable assigned 0 by both o and o′, with no way back, a contradiction. It follows that the fraction of concepts in C∗N that agree with N on (o, o′) is at most (n/2)!(n/2)!/n!, which is clearly superpolynomially small for sufficiently large n. Now, assume that there is no such o1, and let o = 1 and o′ = 0. Then (o, o′) is a model of every Nπ but not of N. Hence, in both cases, (o, o′) is an α-fingerprint of C∗N.

As further evidence for the utility of membership queries, we now give an attribute-efficient algorithm for eliciting tree CP-nets in the presence of arbitrary examples. Let ∆(o, o′) denote the set of all variables whose values differ in the two outcomes o and o′. Algorithm 2 uses the fact that considering only the variables in ∆(o, o′) and their ascendants is enough for searching an improving sequence from o′ to o. Thus, on seeing a counterexample (o, o′), the learner computes the tables for each such variable with the routine PROPAGATE. Because any variable has at most one parent, its table can be found using few membership queries: PROPAGATE(x) asks the queries MQ(0[p], 0[p̄]) and MQ(1[p], 1[p̄]) for the literals p of x; depending on the answers it either sets parents ← ∅, or sets parents ← {SEARCHPARENT(x, p, 1, 0, 0, n)} (resp. parents ← {SEARCHPARENT(x, p, 0, 1, 0, n)}), then adds to N a table for x with parents `parents` and the rules validated by the answers, and finally recurses on the discovered parent. From a practical perspective, note that the examples used in MQ's are restricted to swaps, in order to minimize the cognitive effort spent by the user in comparing outcomes.2

Lemma 5. Let N∗ be a minimal representation of the target concept. Then PROPAGATE is called only on variables x in var(N∗), and always extracts the table of x in N∗.

Proof. By construction, when PROPAGATE(xi) is called by the main loop, xi ∈ var(N∗), because otherwise it would have no table in N∗ and hence its value could not change along any improving sequence from o′ to o. Now, given x ∈ var(N∗), we show that PROPAGATE computes the right set of parents for x, by considering the four possible answer patterns. First, if MQ(0[p], 0[p̄]) = MQ(1[p], 1[p̄]) = Yes, then there is a rule t : p ≻ p̄ in N∗ such that both 0 and 1 satisfy t; hence t is empty, and x has no parent. In the second case, MQ(0[p], 0[p̄]) = Yes and MQ(1[p], 1[p̄]) = No, so by Lemma 3 there is a parent y of x in N∗, which is found by SEARCHPARENT. The third case is symmetric. In the last case, all queries answer No, so there is no rule on x in N∗, and hence x has no parent in N∗. Consequently, in all cases PROPAGATE computes the right set of (at most one) parents. Because each possible rule is validated by one of the queries MQ(o[p], o[p̄]), the table computed for x is the correct one. Furthermore, since each recursive call of PROPAGATE is on a variable y which is the parent of some variable in var(N∗), we have y ∈ var(N∗).

1 For a tree CP-net N, |N| ≤ 6|var(N)| (two rules of size 3 per variable), so if p(n) ≥ 6n, then N can be any tree CP-net.
2 In addition, the outcomes 1 and 0 used in PROPAGATE can be replaced by any suitable pair (o, ō) for which ō = {p̄ : p ∈ o}.
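The four-case analysis in the proof of Lemma 5 can be sketched as follows; the oracle interface and the toy target net are our own illustration, not the paper's pseudocode.

```python
# Sketch of the case analysis in PROPAGATE (our own encoding). mq(o, o2)
# answers a membership query on a swap; zero and one are the all-zeros
# and all-ones outcomes; set_(o, x, v) is o[t] for the literal t = (x = v).

def set_(o, x, v):
    o2 = dict(o)
    o2[x] = v
    return o2

def classify_variable(x, mq, zero, one):
    """Return ('no-rule', None), ('unconditional', p), or ('has-parent', p)
    for variable x, by querying the swaps at the extreme outcomes 0 and 1."""
    for p in (True, False):  # candidate preferred value of x
        at0 = mq(set_(zero, x, p), set_(zero, x, not p))
        at1 = mq(set_(one, x, p), set_(one, x, not p))
        if at0 and at1:
            return ("unconditional", p)  # empty condition: 0 and 1 satisfy it
        if at0 != at1:
            return ("has-parent", p)     # some parent flips the preference
    return ("no-rule", None)

# Toy target over {j, s}: j ≻ j̄ unconditionally; j : s ≻ s̄ and j̄ : s̄ ≻ s.
def mq(o, o2):
    x = next(v for v in o if o[v] != o2[v])
    if x == "j":
        return o["j"] is True
    preferred = o["j"]  # s follows the value of j
    return o["s"] == preferred

zero = {"j": False, "s": False}
one = {"j": True, "s": True}
assert classify_variable("j", mq, zero, one) == ("unconditional", True)
assert classify_variable("s", mq, zero, one) == ("has-parent", True)
```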
By Lemma 5, it follows that all counterexamples supplied to the learner are positive. Moreover, from the structure of the algorithm, it is easily seen that after treating a counterexample (o, o′), the hypothesis N contains the correct tables for all the variables in ∆(o, o′) and their ascendants. This, together with the suffix fixing principle [Boutilier et al., 2004, Section 5.1], ensures that N correctly classifies (o, o′), and hence that the algorithm converges.

Concerning the complexity of our learning algorithm, the number of equivalence queries is at most |var(N∗)| + 1, because each counterexample allows the learner to treat at least one new variable in N∗. Likewise, the routine PROPAGATE treats each variable in var(N∗) exactly once, using at most log n + 4 membership queries for computing the preferences plus searching for a parent. Finally, the hypothesis maintained by the learner is always a subtree of N∗, and hence dominance testing on any example can be evaluated in quadratic time. We therefore obtain:

Theorem 4. Tree CP-nets are attribute-efficiently learnable from equivalence and membership queries over arbitrary outcome pairs: any concept can be identified in polynomial time using at most k + 1 equivalence queries and k log n + 4k membership queries, where k is the number of variables in the minimal representation of the concept.

6 Conclusion

Along the lines of making query-directed learning applicable to preference elicitation, we have provided a model for learning preference networks from equivalence and membership queries, together with significant learnability results. Taking into account the cognitive effort required by human users to answer queries, our model is distinguished by the close way in which it integrates learning and dominance testing, and by the insistence on having convergence bounds that are polynomial in the minimal description size of the target concept, but only polylogarithmic in the total number of attributes. In essence, our results reveal that membership queries are essential for extracting both acyclic CP-nets from restricted outcome pairs and tree CP-nets from arbitrary outcome pairs. Importantly, the examples used by these queries can be limited to "swaps" in order to facilitate their comparison.

To the best of our knowledge, this work provides the first connection between active learning and graphical preference languages. Some authors, though, have recently focused on passive learning, where the goal is to extract a CP-net from a set of examples [Athienitou and Dimopoulos, 2007]. Yet, in this "offline" passive learning model, the problem of finding an acyclic CP-net consistent with a set of arbitrary outcome pairs is NP-hard, even if the dependence graph is known in advance [Lang and Mengin, 2009]. Note that in the "online" passive learning model, acyclic CP-nets are not predictable with a polynomial mistake bound, even if examples are restricted to swaps; this is a direct corollary of Theorem 1, and follows from a standard conversion between online learning and exact learning with EQ's alone [Littlestone, 1988]. Perhaps the closest framework to ours is due to Sachdev [2007], who investigates some preference logics in different learning models. Although encouraging, his results do not take into account the cost of dominance testing, and the query complexity grows exponentially with the size of the rules.

A direction of research that naturally emerges from our study is to extend the learnability results to larger classes of preference networks. Algorithm 1 can provide a starting point for attacking multi-valued acyclic CP-nets. Similarly, Algorithm 2 might be extended to directed-path singly-connected CP-nets. Less obvious, however, is to efficiently learn cyclic CP-nets, even with swap examples. Finally, the problem of revising CP-nets with queries looks challenging.

Acknowledgments. Many thanks to the reviewers for their comments. This work was supported by an ANR project.

References

[Angluin et al., 1992] D. Angluin, M. Frazier, and L. Pitt. Learning conjunctions of Horn clauses. Machine Learning, 1992.

[Angluin, 1988] D. Angluin. Queries and concept learning. Machine Learning, 2(4):319–342, 1988.

[Angluin, 1990] D. Angluin. Negative results for equivalence queries. Machine Learning, 5:121–150, 1990.

[Athienitou and Dimopoulos, 2007] F. Athienitou and Y. Dimopoulos. Learning CP-networks: A preliminary investigation. In Adv. in Preference Handling (M-PREF), 2007.

[Basilico and Hofmann, 2004] J. Basilico and T. Hofmann. Unifying collaborative and content-based filtering. In ICML'04, 2004.

[Boutilier et al., 2004] C. Boutilier, R. Brafman, C. Domshlak, H. Hoos, and D. Poole. CP-nets: A tool for representing and reasoning with conditional ceteris paribus preference statements. J. Artif. Intell. Res., 21:135–191, 2004.

[Bshouty, 1995] N. Bshouty. Exact learning boolean functions via the monotone theory. Information and Computation, 1995.

[Domshlak et al., 2001] C. Domshlak, R. Brafman, and S. Shimony. Preference-based configuration of web page content. In IJCAI'01, pages 1451–1456, 2001.

[Frazier and Pitt, 1996] M. Frazier and L. Pitt. CLASSIC learning. Machine Learning, 25(2-3):151–193, 1996.

[Green and Srinivasan, 1978] P. Green and V. Srinivasan. Conjoint analysis in consumer research: Issues and outlook. J. Consumer Research, 5(2):103–123, 1978.

[Lang and Mengin, 2009] J. Lang and J. Mengin. The complexity of learning separable ceteris paribus preferences. In IJCAI'09 (these proceedings), 2009.

[Littlestone, 1988] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4):285–318, 1988.

[Sachdev, 2007] M. Sachdev. On learning of ceteris paribus preference theories. Master's thesis, Graduate Faculty of North Carolina State University, 2007.

[Ziegler et al., 2008] C. Ziegler, G. Lausen, and J. Konstan. On exploiting classification taxonomies in recommender systems. AI Communications, 21(2-3):97–125, 2008.

Source: https://zanuttini.users.greyc.fr/data/articles/kz09.pdf
