ICKEPS
Competition date: June 25th, 2012
The Competition
Overview
The International Competition on Knowledge Engineering for Planning and Scheduling has been running since 2005 as a biennial event promoting the development and use of knowledge engineering methods and techniques within Planning & Scheduling (P&S).
Past events include 3 competitions (ICKEPS) interleaved with 3 workshops (KEPS), all held during ICAPS conferences: ICKEPS'05 (San Francisco, June 2005), ICKEPS'07 (Providence, September 2007), KEPS'08 (Sydney, September 2008), ICKEPS'09 (Thessaloniki, Greece, September 2009), KEPS'10 (Toronto, May 2010) and KEPS'11 (Freiburg, June 2011).
This year the competition focuses on the Knowledge Engineering process “as a whole”: tools and techniques that encompass and support the main design phases of a planning domain model. We encourage developers to present tools that support all or some of the following phases:
- Elicitation & Modeling
- Model Validation & Verification
- Plan/Schedule Analysis
Format
The competition has two main tracks:
- Design Process Track. This track targets general and specific tools that support all or some of the three phases mentioned above.
- General tools: Tools for the general track will be evaluated (in all phases or in a subset of them) for generic, domain-independent P&S. Competitors will have to demonstrate the advantages of their tools in supporting KE for generic P&S approaches.
- Specific tools: Tools for the specific track will be evaluated in a specific application area. Competitors will have to demonstrate the advantages of their tools in P&S for their chosen application area and how they support the design phases.
- Challenge Track. This track provides competitors with challenging Planning & Scheduling problems from industry partners, so that they can use their tools and creativity to come up with a design solution. Specifications of 3 challenging P&S problems are available (a set of scenarios is provided for each). Competitors must declare their interest in solving at least one of the available problems. Competitors will have to demonstrate the advantage of using their tools/methods to produce a P&S model that satisfies the requirements in the specification (or a subset of them), together with plans for the specified scenarios. Tools for the challenge track will be evaluated on the application domain chosen among those provided by the industrial partners.
Design Process Track
Competitors (Accepted papers)
EUROPA: A Platform for AI Planning, Scheduling, Constraint Programming, and Optimization (pdf, slides)
Javier Barreiro, Matthew Boyce, Jeremy Frank, Michael Iatauro, Tatiana Kichkaylo, Paul Morris, Tristan Smith, Minh Do
FlowOpt: Bridging the Gap Between Optimization Technology and Manufacturing Planners (pdf, slides)
Roman Barták, Milan Jaška, Ladislav Novák, Vladimír Rovenský, Tomáš Skalický, Martin Cully, Con Sheahan, Dang Thanh-Tung
MARIO: Knowledge Engineering for Planning-based Data-flow Composition (pdf, slides)
Mark D. Feblowitz, Anand Ranganathan, Anton V. Riabov, Octavian Udrea
Challenge Track
Domains
We have gathered interesting and challenging P&S problems from industry partners. Competitors should model and solve these problems using their tools. The following links provide the specification documents and the input data for each of the challenge problems. The challenges for this year are listed below, followed by a small illustrative modeling sketch:
- SACE Domain: Planning Solar Array Operations on the International Space Station
- Petrobras Domain: Planning Ship Operations on Petroleum Platforms and Ports
- MEX Domain: Planning Operations on the Mars Express Mission
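To give a flavor of what a "design solution" in these challenges looks like, the fragment below sketches a toy PDDL domain loosely inspired by the Petrobras challenge (ship logistics between ports and platforms). It is purely illustrative: all names (toy-petrobras, sail, load, unload, and the predicates) are invented here and do not come from the official specification documents, which define the actual requirements (fuel, capacities, durations and costs are all omitted in this sketch).

;; Illustrative sketch only; not part of any official challenge specification.
(define (domain toy-petrobras)
  (:requirements :strips :typing)
  (:types ship location cargo)
  (:predicates
    (at ?s - ship ?l - location)        ; ship ?s is docked at location ?l
    (cargo-at ?c - cargo ?l - location) ; cargo ?c is waiting at location ?l
    (loaded ?c - cargo ?s - ship))      ; cargo ?c is on board ship ?s
  (:action sail
    :parameters (?s - ship ?from ?to - location)
    :precondition (at ?s ?from)
    :effect (and (not (at ?s ?from)) (at ?s ?to)))
  (:action load
    :parameters (?c - cargo ?s - ship ?l - location)
    :precondition (and (at ?s ?l) (cargo-at ?c ?l))
    :effect (and (loaded ?c ?s) (not (cargo-at ?c ?l))))
  (:action unload
    :parameters (?c - cargo ?s - ship ?l - location)
    :precondition (and (at ?s ?l) (loaded ?c ?s))
    :effect (and (cargo-at ?c ?l) (not (loaded ?c ?s)))))

A real entry would extend such a model with the temporal and resource constraints from the specification and demonstrate how the competitor's KE tool supports building, validating and maintaining it.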
Competitors
- Team MATFYZ - Roman Barták, Jan Jelinek, Martin Kolombo, Martin Pecka, Martin Hanes, Daniel Toropila, Filip Dvorak, Ota Trunda
- Team APSI - Simone Fratini, Nicola Policella, Angelo Oddi, Riccardo Rasconi
Awards and Winners
This year's awards and winners are:
- Winner of the Design Process Track: EUROPA: A Platform for AI Planning, Scheduling, Constraint Programming, and Optimization (Javier Barreiro, Matthew Boyce, Jeremy Frank, Michael Iatauro, Tatiana Kichkaylo, Paul Morris, Tristan Smith, Minh Do)
- Outstanding Performance on the Challenge Track: Team MATFYZ (Roman Barták, Jan Jelinek, Martin Kolombo, Martin Pecka, Martin Hanes, Daniel Toropila, Filip Dvorak, Ota Trunda)
The winners were announced during the ICAPS conference on June 27th, 2012. A slot was reserved at the ICAPS Demo Session for all competitors.
Proceedings
The ICKEPS 2012 proceedings will be available after the conference and will include the papers, the problem specifications and the competition report.
Schedule
June 25th, 2012
9:30 - 11:00 | Welcome & Introduction - Challenge Track
Oral Presentations - Challenge Track (Industry Partners):
- ESA - Planning Operations on the Mars Express Mission (20 min)
- NASA - Planning Solar Array Operations on the International Space Station (20 min)
- Petrobras - Planning Ship Operations on Petroleum Platforms and Ports (20 min)
- Talk: Petrobras problem: Experiments with AI Planners, by Tiago Vaquero (20 min)
11:00 - 11:30 | Coffee Break
11:30 - 12:30 | Presentations - Challenge Track (Competitors)
12:30 - 14:00 | Lunch Break
14:00 - 15:00 | Welcome & Introduction - Design Process Track
Oral and Interactive Presentations - Design Process Track:
- EUROPA: A Platform for AI Planning, Scheduling, Constraint Programming, and Optimization, by Javier Barreiro, Matthew Boyce, Jeremy Frank, Michael Iatauro, Tatiana Kichkaylo, Paul Morris, Tristan Smith and Minh Do (45 min)
15:00 - 15:30 | Coffee Break
15:00 - 16:30 | Oral and Interactive Presentations - Design Process Track
- FlowOpt: Bridging the Gap Between Optimization Technology and Manufacturing Planners, by Roman Barták, Milan Jaška, Ladislav Novák, Vladimír Rovenský, Tomáš Skalický, Martin Cully, Con Sheahan and Dang Thanh-Tung (45 min)
- MARIO: Knowledge Engineering for Planning-based Data-flow Composition, by Mark D. Feblowitz, Anand Ranganathan, Anton V. Riabov and Octavian Udrea (45 min)
16:30 - 17:30 | Discussion Panel - Ideas for ICKEPS 2014
June 27th, 2012
17:00 - 17:40 | ICKEPS Wrap-up and winner announcement
Evaluation criteria
The winners of the competition will be decided by a board of judges after a careful evaluation of both the paper presentations and the demonstrations of the tools. In addition, the industry partners will participate in the evaluation of the solutions presented in the Challenge Track. Both Software Engineering (SE) issues and P&S issues will be taken into account.
Examples of such criteria are:
- Portability, meant as a measure of how difficult it is to use the tool outside the competitor's own laptop (Design Process Track).
- Robustness, meant as a measure of how sensitive the tool is to the domain in use (general tools) or to the specific problem in the application area (specific tools).
- Usability, meant as a measure of how usable the tool is by both AI experts and target domain experts.
- Spread of use of the tool. (How many people have used the tool so far? To do what?)
- Perceived added value for the P&S community (general tools) or for the application area (specific tools). This is meant as an evaluation of the impact of the tool.
- Flexibility. How easy would it be to use the tool for domains beyond those foreseen by the authors? (general tools) How demanding is it to extend the set of problems that the tool can cope with? (specific tools)
- Originality. Is the approach original (with respect to previous proposals by other authors)?
- Comprehensiveness. Is the subset of domains/problems that the tool can handle well defined? Is the chosen subset sufficient to target significant issues in the application community?
- The challenges involved in the design phases the tool covers.
- Connection with P&S technology. Is the tool a comprehensive KE tool? Is there a useful translation of the output plan or schedule back into the application domain? Is there any feature for interacting with the planner/scheduler?
- Availability of solvers that can take the translated domain model as input, and the performance of the planner and/or scheduler (when available) on the translated domains.
The above criteria are meant only as general guidelines to drive the evaluation process. Given the high variability of the competing tools, not all of the criteria listed above will necessarily be taken into account for every tool, and further criteria might be considered when needed (in such cases the use or omission of particular criteria will be explained). Since an objective measure of each criterion is practically infeasible, the judges will in practice express their own assessment of each one. The final result will in any case be justified by the combination of the evaluations for each criterion.
Only the winners and the winners' performances will be publicly discussed at the end of the competition (there will be no public ranking); however, the evaluations of the remaining participants will be made available upon request and discussed privately.
Call for Participation
A pdf version of the call for participation can be found here.
Organizers
Tiago Stegun Vaquero (University of Toronto)
Contact: tvaquero (AT) mie.utoronto.ca
Simone Fratini (European Space Agency)
Contact: simone.fratini (AT) esa.int
Programme Committee
- Roman Barták, Charles University, Czech Republic
- Mark Boddy, Adventium Labs, USA
- Adi Botea, NICTA/ANU, Australia
- Luis Castillo, IActive Intelligent Solutions, Spain
- Amedeo Cesta, ISTC-CNR, Italy
- Bradley Clement, Jet Propulsion Laboratory, USA
- Stefan Edelkamp, Universität Dortmund, Germany
- Susana Fernández, Universidad Carlos III de Madrid, Spain
- Antonio Garrido, Universidad Politécnica de Valencia, Spain
- Robert Goldman, SIFT, USA
- Arturo González-Ferrer, University of Haifa, Israel
- Rania Hatzi, Harokopio University of Athens, Greece
- Lee McCluskey, University of Huddersfield, UK
- Nicola Policella, ESA, Germany
- Julie Porteous, University of Teesside, UK
Judges
- Tiago Stegun Vaquero, University of Toronto, Canada
- Simone Fratini, ESA, Germany
- Lee McCluskey, University of Huddersfield, UK
Previous competitions