NORDIC JOURNAL OF COMPUTING

VOLUME 14 NUMBER 1-2 SUMMER 2007

Parosh Aziz Abdulla, Johann Deneux, Pritha Mahata, Aletta Nylén
Using Forward Reachability Analysis for Verification of Timed Petri Nets   1

Abstract: We consider verification of safety properties for concurrent real-time systems modelled as timed Petri nets by performing symbolic forward reachability analysis. We introduce a formalism, called region generators, for representing sets of markings of timed Petri nets. Region generators characterize downward closed sets of regions and provide exact abstractions of sets of reachable states with respect to safety properties. We show that the standard operations needed for performing symbolic reachability analysis are computable for region generators. Since forward reachability analysis is necessarily incomplete, we introduce an acceleration technique to make the procedure terminate more often on practical examples. We have implemented a prototype for analyzing timed Petri nets and used it to verify a parameterized version of Fischer's protocol, Lynch and Shavit's mutual exclusion protocol, and a producer-consumer protocol. We also used the tool to extract finite-state abstractions of these protocols.

ACM CCS Categories and Subject Descriptors: F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs
Key words: timed Petri nets, model checking, reachability analysis, downward closed languages

Vitus S.W. Lam
A Formalism for Reasoning about UML Activity Diagrams   43

Abstract: A major problem with UML activity diagrams is the lack of a rigorous approach for verifying the correctness of a model. In this paper, we examine how activity diagrams defined in the UML 2.0 standard are formally analyzed using the NuSMV model checker. A model represented as activity diagrams is first transformed into the NuSMV input language, and NuSMV is then used to verify that a set of system specifications is satisfied.
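The translation described in this abstract targets NuSMV's input language; as a language-neutral illustration of the kind of property such a translation checks, the following minimal Python sketch explores a toy activity diagram and verifies that the final node is reachable (the analogue of a CTL specification such as EF final). The diagram, its node names, and the property are invented for illustration and are not taken from the paper.

```python
from collections import deque

# Toy activity diagram: action nodes and control-flow edges.
# (Names are invented; a real translation would emit NuSMV modules
# and CTL/LTL specifications instead of running this directly.)
edges = {
    "initial": ["receive_order"],
    "receive_order": ["check_stock"],
    "check_stock": ["ship", "reject"],   # decision node, two branches
    "ship": ["final"],
    "reject": ["final"],
}

def reachable(start):
    """Explicit-state exploration: all nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for succ in edges.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

# The analogue of checking 'EF final' in NuSMV: the workflow can
# be driven to completion from the initial node.
assert "final" in reachable("initial")
```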
ACM CCS Categories and Subject Descriptors: D.2.1 [Software Engineering]: Requirements/Specifications - Tools; D.2.4 [Software Engineering]: Software/Program Verification - Model checking; D.2.10 [Software Engineering]: Design - Methodologies; H.4.1 [Information Systems Applications]: Office Automation - Workflow management
Key words: UML activity diagrams, NuSMV, UML 2.0, model checking, workflow management

Ka Lok Man
Formal Specification and Analysis of Hardware Systems in Timed Chi   65

Abstract: Timed Chi (\chi) is a timed process algebra designed for modelling, simulation, verification and real-time control. Its application domain consists of large and complex manufacturing systems. Its straightforward syntax and semantics are also highly suited to architects, engineers and researchers from the hardware design community. Since timed Chi is a well-developed algebraic theory from the field of process algebras with timing, we expect that timed Chi is also well-suited for addressing various aspects of hardware systems (discrete-time systems by nature). To show that timed Chi is indeed useful for the formal specification and analysis of hardware systems, we illustrate its use with some benchmark examples of hardware systems: a multiplexer (MUX), a D flip-flop, an asynchronous arbiter and a simple arbiter (with assertion).

ACM CCS Categories and Subject Descriptors: F.4.3 [Mathematical Logic and Formal Languages]: Formal Languages; B.7.2 [Integrated Circuits]: Design Aids
Key words: formal languages, formal semantics, process algebras, real-time systems, formal specification and analysis, hardware systems

Pinar Heggernes, Dieter Kratsch
Linear-Time Certifying Recognition Algorithms and Forbidden Induced Subgraphs   87

Abstract: We give the first linear-time certifying algorithms to recognize split graphs, threshold graphs, chain graphs, co-chain graphs and trivially perfect graphs, with sublinear certificates for negative output.
In case of membership in a given graph class, our algorithms provide as certificate a structure of the input graph that characterizes the class; in case of non-membership, they provide as certificate a forbidden induced subgraph of the class. Certificates of membership can be authenticated in time O(n+m), and certificates of non-membership can be authenticated in time O(1).

ACM CCS Categories and Subject Descriptors: G.2.2 [Discrete Mathematics]: Graph Theory; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems; B.8.1 [Performance and Reliability]: Reliability, Testing, and Fault-Tolerance
Key words: algorithms and data structures, graph algorithms, recognition of graph classes, certifying algorithms

Anat Bremler-Barr, Leah Epstein
Path Layout on Tree Networks: Bounds in Different Label Switching Models   109

Abstract: Path layout is a fundamental graph problem in label switching protocols. It arises in various protocols, such as the traditional ATM protocol and MPLS, a new label switching protocol recently standardized by the IETF. Path layout is essentially the problem of reducing the size of the label table in a router. The size is equivalent to the number of different paths that pass through the router or start from it. A reduction in the size can be achieved by choosing a relatively small number of paths, from which a larger set is composed using concatenation. In this paper we deal with three variants of the path layout problem according to the special characteristics of paths in three label switching protocols: MPLS, ATM and TRAINET. We focus on tree networks and show an algorithm that finds label tables of small size while permitting concatenation of at most $k$ paths. We prove that this algorithm gives worst-case tight bounds (up to a constant factor) for all three models. The bounds are given as a function of the size of the tree and the maximum degree.
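The concatenation idea can be illustrated on the simplest tree, a line of n nodes: rather than storing one label-switched path per destination (n table entries), one can store roughly 2*sqrt(n) path lengths and reach any destination by concatenating at most k = 2 of them. The following Python sketch is invented for illustration only; the paper's algorithm handles general trees and three protocol models.

```python
import math

# Path layout on a line of n edges with concatenation bound k = 2:
# store the "short" lengths 1..s and the "long" lengths s, 2s, 3s, ...
# Any distance d is then one long hop plus one short hop. With
# s = ceil(sqrt(n)) the table holds about 2*sqrt(n) entries instead
# of n. (Construction invented for illustration.)

def build_table(n):
    s = math.isqrt(n - 1) + 1          # roughly ceil(sqrt(n))
    short = list(range(1, s + 1))      # lengths 1 .. s
    long = list(range(s, n + 1, s))    # lengths s, 2s, ...
    return sorted(set(short + long))

def route(d, table):
    """Express distance d as a concatenation of at most 2 stored paths."""
    for a in table:
        if a == d:
            return [a]
        if d - a in table:
            return [a, d - a]
    return None
```

For n = 100 the table has 19 entries, yet every distance 1..100 decomposes into at most two stored hops.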
ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems ("Routing and layout"); G.2.2 [Discrete Mathematics]: Graph Theory ("Network problems")
Key words: tree networks, label switch paths, asynchronous transfer mode (ATM), multi protocol label switching (MPLS)

Meena Mahajan, Raghavan Rama, S. Vijayakumar
Block Sorting: A Characterization and Some Heuristics   126

Abstract: Given a permutation \pi, the block sorting problem is to find a shortest series of block moves which, when applied in succession, sorts \pi. Here a block is a maximal substring of successive integers in order, and a block move is the displacement of a block to a location where it merges with another block. Block sorting is an NP-hard optimization problem and has a factor-2 approximation algorithm. In this paper, we present a combinatorial characterization of optimal solutions of the block sorting problem and use it to prove various computationally important properties of the problem. In particular, we identify certain block moves that are provably optimal. We also establish the equivalence of block sorting and a combinatorial puzzle. We consider several polynomial-time heuristics for block sorting that are inspired by the above-mentioned combinatorial characterization, by the approach based on the block merging problem, or by both. Although these heuristics seem to be promising candidates for improving the approximation ratio (their approximation ratios are provably at most 2), we show that none of them leads to an approximation ratio better than 2.

ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems
Key words: block sorting, approximation, heuristics, combinatorial characterization

VOLUME 14 NUMBER 3 FALL 2008

Mark Cieliebak, Stephan Eidenbenz, Aris T. Pagourtzis, Konrad Schlude
On the Complexity of Variations of Equal Sum Subsets   151

Abstract: The Equal Sum Subsets problem, where we are given a set of positive integers and ask for two nonempty disjoint subsets whose elements add up to the same total, is known to be NP-hard. In this paper we give (pseudo-)polynomial algorithms and/or (strong) NP-hardness proofs for several natural variations of Equal Sum Subsets. Among others, we present (i) a framework for obtaining NP-hardness proofs and pseudo-polynomial time algorithms for Equal Sum Subsets variations, which we apply to variants of the problem with additional selection restrictions, (ii) a proof of NP-hardness and a pseudo-polynomial time algorithm for the case where we ask for two subsets such that the ratio of their sums is some fixed rational r > 0, (iii) a pseudo-polynomial time algorithm for finding k subsets of equal sum, with k = O(1), and a proof of strong NP-hardness for the same problem with k = \Omega(n), and (iv) algorithms and hardness results for finding k equal sum subsets under the additional requirement that the subsets be of equal cardinality. Our results are a step towards determining the dividing lines between polynomial time solvability, pseudo-polynomial time solvability, and strong NP-completeness of subset-sum related problems.

ACM CCS Categories and Subject Descriptors: F.2 [Analysis of Algorithms and Problem Complexity]
Key words: equal sum subsets, partition, knapsack problems, strong NP-completeness, pseudo-polynomial algorithms

Qiwen Xu, Naijun Zhan
Formalising Scheduling Theories in Duration Calculus   173

Abstract: Traditionally, many proofs in real-time scheduling theory were informal and lacked the rigor usually required for good mathematical proofs. Some attempts have been made towards making the proofs more reliable, including using formal logics to specify scheduling algorithms and verify their properties.
In particular, Duration Calculus, a real-time interval temporal logic, has been used, since timing requirements in scheduling can be naturally associated with intervals. This paper aims to improve and summarize the work in this area. Static and dynamic priority scheduling algorithms are formalised in Duration Calculus, and classical theorems for schedulability analysis are proven using the formal proof system of Duration Calculus.

ACM CCS Categories and Subject Descriptors: C.3 [Special-Purpose and Application-Based Systems]: Real-time and embedded systems; D.2.4 [Software/Program Verification]: Formal methods
Key words: real-time scheduling, schedulability conditions, temporal logics, duration calculus, formal proof

Evangelos Kranakis, Danny Krizanc and Sunil Shende
Tracking Mobile Users in Cellular Networks Using Timing Information   202

Abstract: We consider the problem of tracking a mobile user moving through a cellular network using node queries that provide information as to the last time the user visited the node. Queries are executed in rounds, each round may involve queries to many nodes simultaneously, and the user may move at the same time that queries are performed. The cost measures considered are the number of rounds required to find the user and the total number of queries made during the execution of the algorithm. We present a number of algorithms for general networks, as well as upper and lower bounds for specific network topologies.

ACM CCS Categories and Subject Descriptors: F.2.3 [Analysis of Algorithms and Problem Complexity]: Tradeoffs between Complexity Measures
Key words: cellular networks, query, round, timing information, tracking

Y. Boichut, P.-C. Héam and O. Kouchnarenko
Approximation-Based Tree Regular Model-Checking   216

Abstract: This paper addresses the following general problem of tree regular model-checking: decide whether R*(L) \cap L_p = \emptyset, where R* is the reflexive and transitive closure of a successor relation induced by a term rewriting system R, and L and L_p are both regular tree languages. We develop an automatic approximation-based technique to handle this problem - undecidable in general - in most practical cases, extending recent work by Feuillade, Genet and Viet Triem Tong. We also make this approach fully automatic for the practical validation of security protocols.

ACM CCS Categories and Subject Descriptors: D.2.4 [Software Engineering]: Software/Program Verification; F.4.2 [Mathematical Logic and Formal Languages]: Grammars and Other Rewriting Systems
Key words: verification, model-checking, regular languages, security protocols

VOLUME 14 NUMBER 4 WINTER 2008

SELECTED PAPERS OF THE NORDIC WORKSHOP ON MODEL-DRIVEN ENGINEERING (NW-MoDE), REYKJAVIK, AUGUST 2008:

Ebba Hvannberg
Guest Editor's Foreword   243

Cyril Carrez, Lotte Johansen, Pawel Cieslak and Stefan Hänsgen
Service Engineering with the SIMS Design and Validation Tools   245

Abstract: As service engineering using modelling languages like UML increases, tool support for design and validation becomes important. However, few UML tools support validation of service behaviour. The SIMS project has developed a complete tool chain to support service engineering, with validation opportunities built into the model. This paper presents this tool suite, which consists of three tightly integrated parts, each developed as a plug-in to the Eclipse platform: a UML editor tailored for service engineering, a service validator called Ramses for behaviour validation, and a tool, Artefact Wizard, to link the UML model with an ontology. The tools let service developers follow the SIMS service development process quite smoothly.
ACM CCS Categories and Subject Descriptors: D.2.1 [Requirements/Specifications]; D.2.2 [Design Tools and Techniques]; D.2.4 [Software/Program Verification]; D.2.13 [Reusable Software]
Key words: service design, UML editors, verification, ontologies

Samuel Lahtinen and Kai Koskimies
A Model-Based Approach to Reflective Command Interfaces   264

Abstract: Reflective command interfaces allow changing the command language by using the language itself. We argue that reflective command languages can be useful in many applications that allow the introduction of new types of concepts to be used in the application. We propose an approach to supporting reflective command languages through metamodeling, so that a reflective command interface can be automatically generated on the basis of a conceptual model of the application, given as an instance of a metamodel. A prototype implementation of the required infrastructure has been developed and used as a proof of concept, applied to a small example application.

ACM CCS Categories and Subject Descriptors: H.5.2 [Information interfaces and presentation]: User interfaces; D.2.1 [Software Engineering]: Requirements/Specifications; D.3.2 [Programming Languages]: Language Classifications
Key words: model-based user interfaces, speech interfaces, reflective commands

Kenneth Lind and Rogardt Heldal
Estimation of Real-Time Software Component Size   282

Abstract: For distributed networks that will be mass-produced, such as computer systems in modern vehicles, it is crucial to develop cost-efficient hardware. A distributed network in a vehicle can consist of 100 ECUs (Electronic Control Units). In this paper we consider the amount of memory needed for these ECUs. They should contain enough memory to survive several software generations, without incurring the unnecessary cost of too much memory. We show that UML component diagrams can be used to collect enough information for estimating memory size using an FSM (Functional Size Measurement) method.
We develop two linear models capable of estimating memory size for two common types of software components before the software is available. We support our findings by two experiments involving several software components from the automotive industry.

ACM CCS Categories and Subject Descriptors: D.2.8 [Software Engineering]: Metrics - Product metrics; C.3 [Computer Systems Organization]: Special-Purpose and Application-Based Systems - Real-time and embedded systems
Key words: Functional Size Measurement, COSMIC Function Points, UML components, system architecture, software code size

Parastoo Mohagheghi, Vegard Dehlen and Tor Neple
A Metamodel and Supporting Process and Tool for Specifying Quality Models in Model-Based Software Development   301

Abstract: Modelling is applied increasingly in software development, from developing sketches to blueprints of design, and in approaches that use models in all phases of software development, such as model-driven engineering. Consequently, developers need tools and techniques that allow them to reflect upon the quality of the models, as well as of the environment used for developing these models, such as modelling languages, modelling processes and tools. This article describes work on developing quality models in model-based software development by identifying stakeholders and their purposes of modelling, specifying quality goals based on these purposes, identifying means or practices required to achieve the quality goals, and selecting proper evaluation methods. These are steps in developing quality models that include the rationale for selecting quality goals; they are supported by a process, a metamodel and a tool developed in Eclipse.
The contributions of the approach are, firstly, a framework for developing quality models that is tailored to model-based software development and, secondly, example quality models that may be reused by different projects, thus facilitating work on quality issues in software development.

ACM CCS Categories and Subject Descriptors: D.2.8 [Software Engineering]: Metrics - Product metrics; D.2.6 [Software Engineering]: Programming Environments - Graphical environments; D.2.2 [Software Engineering]: Design Tools and Techniques - Computer-aided software engineering (CASE); D.2.9 [Software Engineering]: Management - Software quality assurance (SQA)
Key words: quality model, modelling, model-driven engineering, metrics, metamodel

Mikko Raatikainen, Varvana Myllärniemi and Tomi Männistö
Featback: Method for Enhancing Management of Agile Development   321

Abstract: Agile methods have gained popularity in the software industry. A backlog, which is, roughly, a prioritized list of things to be done, is used as a means of managing development. However, one particular challenge is that a backlog does not provide information beyond isolated backlog items, in particular about composed entities such as the product as a whole. We describe the Featback method, which provides information that the existing means do not directly provide, especially about composed entities. The information is based on analyses using concepts from an integrated backlog and feature model, thus integrating structural and developmental information. Justification is provided by defining specifications for the analyses and by an expository instantiation of the Featback method on a running example.
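The integrated backlog and feature model can be pictured as follows: backlog items are linked to leaf features, the feature model adds tree structure, and a composed entity (a feature subtree, up to the whole product) gets derived information, here remaining effort, that the flat backlog alone does not provide. In the Python sketch below, all names and numbers are invented; this illustrates the general idea, not the Featback method itself.

```python
# Invented feature model: a tree of features.
feature_children = {
    "product": ["ui", "engine"],
    "ui": ["login", "dashboard"],
    "engine": ["search"],
}

# Invented backlog: (item, linked feature, remaining effort in points).
backlog = [
    ("item-1", "login", 3),
    ("item-2", "dashboard", 5),
    ("item-3", "search", 8),
]

def remaining_effort(feature):
    """Aggregate open effort over a feature and all its sub-features,
    yielding information about composed entities that isolated backlog
    items do not carry."""
    own = sum(pts for _, f, pts in backlog if f == feature)
    return own + sum(remaining_effort(c)
                     for c in feature_children.get(feature, []))
```

Here `remaining_effort("product")` aggregates over the whole tree, answering a question about "the product as a whole" that the flat item list cannot.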
ACM CCS Categories and Subject Descriptors: K.6.3 [Management of Computing and Information Systems]: Software Management - Software development
Key words: agile development, backlog, feature modeling, management

Outi Räihä, Kai Koskimies, Erkki Mäkinen and Tarja Systä
Pattern-Based Genetic Model Refinements in MDA   338

Abstract: We explore the application of genetic algorithms in model transformations that can be understood as pattern-based refinements. In MDA (Model Driven Architecture), such transformations can be exploited for deriving a PIM model from a CIM model. The approach uses design patterns as the basis of mutations and exploits various quality metrics for deriving a fitness function. A genetic representation of models and their transformations is proposed, and the genetic transformation process is studied experimentally. The results suggest that genetic algorithms provide a feasible vehicle for model transformations, leading to a convergent and reasonably fast transformation process. However, more work is needed to improve the quality of the individual models produced by the technique.

ACM CCS Categories and Subject Descriptors: D.2.11 [Software architectures]; D.2.2 [Design tools and techniques]
Key words: model transformation, genetic algorithm, model driven architecture, software design

Author Index Volume 14 (2007-2008)   356

VOLUME 13 NUMBER 1-2 SUMMER 2006

SELECTED PAPERS OF THE 17TH NORDIC WORKSHOP ON PROGRAMMING THEORY (NWPT'05), OCTOBER 19--21, 2005:

Neil D. Jones
Guest Editor's Foreword   1

Maksym Bortin, Einar Broch Johnsen and Christoph Lüth
Structured Formal Development in Isabelle   2

Abstract: General-purpose theorem provers provide advanced facilities for proving properties about specifications, and may therefore be a valuable tool in formal program development. However, these provers generally lack many of the useful structuring mechanisms found in functional programming or specification languages.
This paper presents a constructive approach to adding theory morphisms and parametrisation to theorem provers, while preserving the proof support and consistency of the prover. The approach is implemented in Isabelle and illustrated by examples of an algorithm design rule and of the modular development of computational effects for imperative language features based on monads.

ACM CCS Categories and Subject Descriptors: F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic; I.2.2 [Artificial Intelligence]: Automatic Programming
Key words: program development, transformation, theorem provers, monads

Ingo Brückner, Björn Metzler and Heike Wehrheim
Optimizing Slicing of Formal Specifications by Deductive Verification   22

Abstract: Slicing is a technique for extracting parts of programs or specifications with respect to certain criteria of interest. The extraction is carried out in such a way that properties as described by the slicing criterion are preserved, i.e., they hold in the complete program if and only if they hold in the sliced program. During verification, slicing is often employed to reduce the state space of specifications to a size tractable by a model checker. The computation of specification slices relies on the construction of dependence graphs, reflecting (at least) control and data dependencies in specifications. The more dependencies the graph has, the less removal of parts is possible. In this paper we present a technique for optimizing the construction of the dependence graph by using deductive verification techniques. More precisely, we propose a technique for showing that certain control dependencies in the graph can be eliminated. The technique employs small deductive proofs of the enabledness of certain transitions.
We thereby obtain dependence graphs with fewer control dependencies and, as a consequence, smaller specification slices, which are an easier target for model checking.

ACM CCS Categories and Subject Descriptors: D.2 [Software Engineering]; D.2.1 [Software Engineering]: Requirements/Specification; D.2.4 [Software Engineering]: Software/Program Verification
Key words: program slicing, integrated formal specifications, exact reduction, deductive verification, model checking

Tristan Crolard, Samuel Lacas and Pierre Valarcher
On the Expressive Power of the Loop Language   46

Abstract: We define a translation of Meyer and Ritchie's Loop language into a subsystem of Gödel's system T (with product types). We then show that this translation actually provides a lock-step simulation if a call-by-value strategy is assumed for system T. Some generalizations and possible applications are discussed.

ACM CCS Categories and Subject Descriptors: F.1.3 [Computation by Abstract Devices]: Complexity Measures and Classes -- machine-independent complexity; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages -- operational semantics
Key words: loop language, primitive recursion, lambda-calculus, implicit computational complexity

Troels C. Damgaard and Lars Birkedal
Axiomatizing Binding Bigraphs   58

Abstract: We axiomatize the congruence relation for binding bigraphs and prove that the generated theory is complete. In doing so, we define a normal form for binding bigraphs and prove that it is unique up to certain isomorphisms. Our work builds on Milner's axioms for pure bigraphs. We have extended the set of axioms with five new axioms concerned with binding, and we have altered some of Milner's axioms for ions, because ions in binding bigraphs have names on both their inner and outer faces. The resulting theory is a conservative extension of Milner's theory for pure bigraphs.
ACM CCS Categories and Subject Descriptors: D.3.1 [Programming Languages]: Formal Definitions and Theory; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages -- process models
Key words: graphical models of computation, bigraphs, axioms for static congruence

Monica Nesi and Giustina Nocera
Deriving the Type Flaw Attacks in the Otway-Rees Protocol by Rewriting   78

Abstract: This paper presents an approach to formalizing and verifying security protocol specifications based on rewriting techniques. A rewrite system R_P describes the steps of a protocol and the properties under consideration, and a rewrite system R_I defines an intruder's ability to decompose and decrypt messages. The equational theory generated by R = R_P \cup R_I characterizes the recognizability of terms by an intruder, i.e. how an intruder can learn (parts of) messages exchanged among principals communicating over an insecure network. Security properties, such as authentication and secrecy, can be expressed by means of the intruder's recognizability, and verifying whether a term is recognized by an intruder reduces to checking whether the term can be rewritten to a normal form in the intruder's initial knowledge. A rewriting strategy is defined that, given a term t representing a property to be proved, suitably expands and reduces t using the rules in R to derive whether or not t is recognized by an intruder. The approach is applied to the Otway-Rees symmetric-key protocol by deriving its well-known type flaw attacks.
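An intruder system in the style of R_I can be pictured as a closure computation: the intruder splits pairs and decrypts ciphertexts whose keys it already knows, and a term is recognized once it lands in this closure of the initial knowledge. The following Python sketch uses an invented term encoding and a naive fixpoint in place of the paper's rewrite rules and rewriting strategy.

```python
# Terms: atoms are strings; ("pair", a, b) is a pair; ("enc", m, k) is
# the message m encrypted under key k. (Encoding invented for
# illustration; the paper works with a term rewriting system R_I.)

def closure(knowledge):
    """Close the intruder's knowledge under pair projection and
    decryption with known keys."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for term in list(known):
            derived = []
            if isinstance(term, tuple) and term[0] == "pair":
                derived = [term[1], term[2]]          # split the pair
            elif (isinstance(term, tuple) and term[0] == "enc"
                  and term[2] in known):
                derived = [term[1]]                   # decrypt: key known
            for t in derived:
                if t not in known:
                    known.add(t)
                    changed = True
    return known

# Observing {msg}_kab paired with kab, the intruder first learns the
# key and then, by decryption, the plaintext.
initial = {("pair", ("enc", "msg", "kab"), "kab")}
assert "msg" in closure(initial)
```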
ACM CCS Categories and Subject Descriptors: C.2.2 [Computer-Communication Networks]: Network Protocols -- protocol verification; F.4.2 [Mathematical Logic and Formal Languages]: Grammars and Other Rewriting Systems
Key words: security protocols, formalization, verification, term rewriting

Christoffer Rosenkilde Nielsen and Hanne Riis Nielson
Static Analysis for Blinding   98

Abstract: The classical key distribution protocols are based on symmetric and asymmetric encryption as well as digital signatures. Protocols with different purposes often require different cryptographic primitives; an example is electronic voting protocols, which are often based on the cryptographic operation of blinding. In this paper we study the theoretical foundations of one of the successful approaches to validating cryptographic protocols, and we extend it to handle the blinding primitive. Our static analysis approach is based on Flow Logic; this gives us a clean separation between the specification of the analysis and its realisation in an automatic tool. We concentrate on the former in the present paper and provide the semantic foundation for our analysis of protocols using blinding, also in the presence of malicious attackers.

ACM CCS Categories and Subject Descriptors: F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages -- program analysis; F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic -- lambda calculus and related systems
Key words: static analysis, blinding, protocols, process algebras, LySa

Kristian Støvring
Higher-Order Beta Matching with Solutions in Long Beta-Eta Normal Form   117

Abstract: Higher-order matching is a special case of unification of simply-typed lambda-terms: in a matching equation, one of the two sides contains no unification variables.
Loader has recently shown that higher-order matching up to beta equivalence is undecidable, but the decidability of higher-order matching up to beta-eta equivalence is a long-standing open problem. We show that higher-order matching up to beta-eta equivalence is decidable if and only if a restricted form of higher-order matching up to beta equivalence is decidable: the restriction is that solutions must be in long beta-eta normal form.

ACM CCS Categories and Subject Descriptors: F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic -- lambda calculus and related systems
Key words: higher-order matching, higher-order beta matching, long beta-eta normal form

Leonidas Tsiopoulos and Marina Waldén
Formal Development of NoC Systems in B   127

Abstract: When developing complex Network-on-Chip (NoC) systems, we need to ensure that they satisfy their functional requirements. This can be achieved by developing the systems in a structured way using a formal method with tool support. We use the B Action Systems formalism for this purpose. We give a general formal framework for the development of NoC systems that avoids architectures with complex controllers and complex arbiter modules for deciding the routing path of a data packet. The development is performed in a stepwise manner, composing more advanced routing components out of simpler units.
ACM CCS Categories and Subject Descriptors: C.2.1 [Computer-Communication Networks]: Network Architecture and Design; D.2.4 [Software Engineering]: Software/Program Verification; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; C.3 [Special-Purpose and Application-Based Systems]
Key words: B Method, Action Systems, Network-on-Chip, formal verification, parallel composition

VOLUME 13 NUMBER 3 FALL 2006

SELECTED PAPERS OF THE FOURTH NORDIC WORKSHOP ON UML AND SOFTWARE MODELLING (NWUML'06), JUNE 12--14, 2006:

Andreas Prinz and Merete Skjelten Tveit
Guest Editors' Foreword   147

Pauli Byckling, Petri Gerdt, Ludwik Kuzniarz and Jorma Sajaniemi
Increasing Comprehensibility of Object Models: Making the Roles of Attributes Explicit in UML Diagrams   149

Abstract: UML makes it possible to produce and maintain object-oriented models of systems from different perspectives and viewpoints using the notion of a diagram. This paper describes how the comprehensibility of models can be increased by augmenting some UML diagrams with additional information about the behavior of attributes. The recent notion of ``roles of variables'' is used to describe various behaviors. The increased comprehensibility of two diagram types, class diagrams and sequence diagrams, is demonstrated by examples where the added role information reveals the behavior of attributes in a compact manner.
ACM CCS Categories and Subject Descriptors: D.2.2 [Software Engineering]: Design Tools and Techniques -- object-oriented design methods; D.2.3 [Software Engineering]: Coding Tools and Techniques -- object-oriented programming; F.3.3 [Logics and Meanings of Programs]: Studies of Program Constructs -- object-oriented constructs
Key words: behavior of attributes, roles of variables, UML, comprehensibility

Johannes Koskinen, Anna Ruokonen and Tarja Systä
A Pattern-Based Approach to Generate Code from API Usage Scenarios   162

Abstract: Using the API of a software system requires learning and knowledge of the API itself. In tutorials, API usage is typically described by giving a couple of example applications and some example code fragments. While such examples are useful and illustrative for the application developer, it is often difficult for her to separate application-specific parts from API-specific ones, and further to distinguish optional from mandatory parts. In this paper we discuss an approach and tool support for guiding the application developer in API usage in a step-by-step fashion. As the examples are often sequential by nature, we use UML sequence diagrams to describe possible API usage scenarios. We propose pattern-based tool support to exploit such sequence diagrams and to generate the application code in a semi-automated way.

ACM CCS Categories and Subject Descriptors: D.2.2 [Software Engineering]: Design Tools and Techniques -- computer-aided software engineering; D.2.11 [Software Engineering]: Software Architectures -- domain-specific architectures; D.2.6 [Software Engineering]: Programming Environments -- interactive environments
Key words: pattern, code generation, UML, tool support

Mika Siikarla, Jari Peltonen and Johannes Koskinen
Towards Unambiguous Model Fragments   180

Abstract: In model-based design, model fragments are used in everyday work.
Concurrent operations on separate parts of a model and communication between stakeholders are some examples. However, the use of model fragments is not always conscious, partly because it is typically not covered by the design process. The concept of a model fragment is practically undefined, which leads to adopting loose, ad hoc definitions. Thus, communication using model fragments is prone to misunderstandings, and creating and using them is fuzzy. This prevents the development of systematic methods and tools. In this paper we clarify the concept and the problem area of model fragments, and discuss how to create and use them. We also present a proof-of-concept implementation of a model fragmentation mechanism and demonstrate it in a small example. ACM CCS Categories and Subject Descriptors: D.2.2 [Software Engineering]: Design Tools and Techniques -- computer-aided software engineering (CASE); D.2.11 [Software Engineering]: Software Architectures -- data abstraction Key words: modeling, model fragments, UML André L. Santos, Kai Koskimies and Antónia Lopes A Model-Driven Approach to Variability Management in Product-Line Engineering 196 Abstract Object-oriented frameworks play an essential role in the implementation of product-line architectures (PLAs) for product families. However, recent case studies reveal that deriving products from the shared software assets of a product-line is a time-consuming and expensive activity. In this paper, we present a model-driven approach for product derivation in framework-based product-lines. This approach aims to alleviate the aforementioned problems by bridging the gap between domain and application engineering activities in product-line-based development. Our approach is centered on a layered model embracing different artifacts ranging from models for conceptual and architectural variability to models for the realization of variation points. 
The approach provides mechanisms for enforcing the variation rules throughout the product derivation process and for documenting the variability issues in frameworks and the derived applications. We demonstrate the approach using an existing Java GUI framework. ACM CCS Categories and Subject Descriptors: D.2.2 [Software Engineering]: Design Tools and Techniques -- computer-aided software engineering (CASE), object-oriented design methods; D.2.13 [Software Engineering]: Reusable Software -- domain engineering, reuse models; D.2.11 [Software Engineering]: Software Architectures -- domain-specific architectures Key words: product-lines, frameworks, variability management, model-driven engineering VOLUME 13 NUMBER 4 WINTER 2006 Assefaw Hadish Gebremedhin, Mohamed Essaïdi, Isabelle Guérin Lassous, Jens Gustedt and Jan Arne Telle PRO: A Model for the Design and Analysis of Efficient and Scalable Parallel Algorithms 215 Abstract We present a new parallel computation model called the Parallel Resource-Optimal (PRO) computation model. PRO is a framework proposed to enable the design of efficient and scalable parallel algorithms in an architecture-independent manner, and to simplify the analysis of such algorithms. A focus on three key features distinguishes PRO from existing parallel computation models. First, the design and analysis of a parallel algorithm in the PRO model is performed relative to the time and space complexity of a sequential reference algorithm. Second, a PRO algorithm is required to be both time- and space-optimal relative to the reference sequential algorithm. Third, the quality of a PRO algorithm is measured by the maximum number of processors that can be employed while optimality is maintained. Inspired by the Bulk Synchronous Parallel model, an algorithm in the PRO model is organized as a sequence of supersteps. 
Each superstep consists of distinct computation and communication phases, but the supersteps are not required to be separated by synchronization barriers. Both computation and communication costs are accounted for in the runtime analysis of a PRO algorithm. Experimental results on parallel algorithms designed using the PRO model---and implemented using its accompanying programming environment SSCRAP---demonstrate that the model indeed delivers efficient and scalable implementations on a wide range of platforms. ACM CCS Categories and Subject Descriptors: F.1.2 [Computation by Abstract Devices]: Models of Computation; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems Key words: parallel computers, parallel computation models, parallel algorithms, complexity analysis Alexandru Berlea Online Evaluation of Regular Tree Queries 240 Abstract Regular tree queries (RTQs) are a class of queries considered especially relevant for the expressiveness and evaluation of XML query languages. The algorithms proposed so far for evaluating queries online, i.e. while scanning the input data rather than explicitly building the tree representation of the input beforehand, only cover restricted subsets of RTQs. In contrast, we introduce here an efficient algorithm for the online evaluation of unrestricted RTQs. We prove our algorithm is optimal in the sense that it finds matches at the earliest possible time for the query and the input document at hand. The time complexity of the algorithm is quadratic in the input size in the worst case and linear in many practical cases. A preliminary experimental evaluation of our practical implementation is very encouraging. 
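The online evaluation idea in the abstract above can be illustrated with a toy sketch. This is not the authors' automaton-based algorithm: it handles only a fixed child-path query, and the event encoding and the `online_matches` helper are assumptions made for the example. It does show the key point, though: matches are reported while scanning start/end events, without first building the tree.

```python
def online_matches(events, path):
    """Scan a stream of ('start', tag) / ('end', tag) events and report
    (by event position) every element whose chain of enclosing ancestors
    ends with `path` -- decided at the element's start event, the earliest
    possible moment for this simple query class."""
    stack, hits = [], []
    for pos, (kind, tag) in enumerate(events):
        if kind == "start":
            stack.append(tag)
            if stack[-len(path):] == list(path):  # path of nearest ancestors matches
                hits.append(pos)
        else:
            stack.pop()
    return hits
```

The stack never holds more than the current nesting depth, so memory stays proportional to the depth of the document, not its size.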
ACM CCS Categories and Subject Descriptors: H.3 [Information Storage and Retrieval]; I.7 [Document and Text Processing] Key words: XML, querying, XML streams, event-based processing, tree automata, XPath, XQuery Rajiv Kumar Poddar and Purandar Bhaduri Verification of Giotto based Embedded Control Systems 266 Abstract An implementation of a control system design may not preserve the functional and timing requirements of the application. Our goal is to verify that an implementation meets the high-level timing and functional specifications of a control application. We take Giotto as the implementation model, and verify Giotto models using UPPAAL, a tool box for modelling, simulation and verification of timed automata. We present a translation scheme for building timed automata in UPPAAL for real-time systems written in Giotto. When translating Giotto to timed automata, we consider timing constraints imposed by the control application, as well as the characteristics of the implementation platform. These timing constraints take into account execution times of atomic tasks, worst case execution times, worst case communication times and jitters. The timed models obtained in this manner are analysed and the corresponding system's functional and timing properties are verified using UPPAAL. We develop the translation scheme in two phases. The first applies to basic Giotto models; the second considers Giotto models with annotations providing information on scheduling and resource allocation. We demonstrate both phases of the scheme by applying it to two Giotto models -- an elevator control and a hovercraft control system. The two systems vary in their complexity and in their functional and non-functional requirements. We report on the results of our verification of the Giotto models. 
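As a back-of-the-envelope companion to the timing constraints mentioned in the abstract above (execution times, worst-case execution times), here is a minimal sketch of the kind of necessary utilization check one might run before attempting full timed-automata verification. It is emphatically not the Giotto-to-UPPAAL translation; the mode/task encoding is an assumption made for illustration.

```python
def mode_demand_ok(period, tasks):
    """tasks: (invocations_per_period, wcet) pairs for one hypothetical
    Giotto mode.  Checks that the summed worst-case execution demand fits
    into the mode period -- a necessary (not sufficient) condition: a mode
    failing this check cannot be schedulable on a single processor."""
    demand = sum(n * wcet for n, wcet in tasks)
    return demand <= period
```

A model-checking tool like UPPAAL goes far beyond this, accounting for release jitter, communication times, and interleavings; this check merely rules out obviously overloaded modes early.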
ACM CCS Categories and Subject Descriptors: D.2.4 [Software Engineering]: Software/Program Verification -- model checking; I.6.4 [Simulation and Modeling]: Model Validation and Analysis; I.6.5 [Simulation and Modeling]: Model Development -- modeling methodologies; C.3 [Special-Purpose and Application Based Systems] -- real-time and embedded systems Key words: embedded control systems, real-time verification, guided model translation Prosenjit Gupta Range-Aggregate Query Problems Involving Geometric Aggregation Operations 294 Abstract We consider variations of standard orthogonal range searching motivated by applications in database querying and VLSI layout processing. In a generic instance of such a problem, called a range-aggregate query problem, we wish to preprocess a set S of geometric objects such that given a query orthogonal range q, a certain intersection or proximity query on the objects of S intersected by q can be answered efficiently. Although range-aggregate queries have been widely investigated in the past for aggregation functions like average, count, min, max, sum, etc., we consider aggregation operations involving geometric intersection searching problems in this paper. Efficient solutions are provided for point enclosure queries for d >= 1, 1-d interval intersection, 2-d orthogonal segment intersection and interval/rectangle/hyper-rectangle enclosure queries in this framework. ACM CCS Categories and Subject Descriptors: E.1 [Data Structures]; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems Key words: data structures, computational geometry, range-aggregate queries K. Subramani and John Argentieri Chain Programming over Difference Constraints 309 Abstract Chain Programming is a restricted form of Linear Programming with a number of interesting properties. A Chain Program is characterized by a total ordering on the program variables (of a linear program). In other words, the constraints x_1 <= x_2 <= ... <= x_n are either implicitly or explicitly part of the constraint system defined by a linear program. At the present juncture, it is not clear whether an arbitrary linear program augmented with a chain is easier to solve than linear programs in general, either asymptotically or computationally; however, it is definitely the case that Chain Programs possess a number of useful properties. In this paper, we restrict ourselves to linear programs that are constituted entirely of difference constraints. For such linear programs (also called Difference Constraint Systems), we show that the total ordering of the program variables results in an elegant divide and conquer algorithm for the problem of feasibility testing. This approach can be parallelized in a straightforward fashion to run on any SIMD or MIMD architecture, thereby greatly enhancing its effectiveness. Inasmuch as difference constraint logic is an integral part of a number of verification problems in both model checking and real-time scheduling, our result is of particular importance to these communities. Secondly, difference constraint systems are duals of shortest path problems in directed graphs and hence our study is important from the perspectives of the Artificial Intelligence and Operations Research communities as well. One of the surprising consequences of our research is the establishment of a link between Chain Programs over Difference Constraints (CPD) and a specialized class of Totally Unimodular (TUM) matrices called interval matrices. This connection makes it likely that there exists a more efficient algorithm for the problem of feasibility testing in CPDs. 
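The duality with shortest-path problems mentioned in the abstract above gives the standard feasibility test for difference constraint systems (not the authors' divide-and-conquer algorithm): introduce an edge i -> j of weight c for each constraint x_j - x_i <= c, add a source with zero-weight edges to all vertices, and run Bellman-Ford; the system is feasible exactly when the graph has no negative-cost cycle. A self-contained sketch:

```python
def feasible(n, constraints):
    """constraints: list of (i, j, c) meaning x_j - x_i <= c over
    variables x_0..x_{n-1}.  Initializing dist to 0 plays the role of the
    extra source vertex.  Bellman-Ford: if a full relaxation round makes
    no change, dist is a satisfying assignment; if all n rounds change
    something, a negative-cost cycle (infeasible subsystem) must exist."""
    dist = [0] * n
    for _ in range(n):
        changed = False
        for i, j, c in constraints:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                changed = True
        if not changed:
            return True     # dist itself is a solution for the x's
    return False            # still relaxing after n rounds: negative cycle
```

For instance, {x_1 - x_0 <= 1, x_0 - x_1 <= -2} is infeasible (summing gives 0 <= -1), which the routine detects as a negative cycle of cost -1.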
ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems; G.2.1 [Discrete Mathematics]: Combinatorics; F.4.3 [Mathematical Logic and Formal Languages]: Formal Languages Key words: Chain Programs, Linear Programming, Divide and Conquer, Difference Constraints, Negative cost cycles Daniel Lemire Streaming Maximum-Minimum Filter Using No More than Three Comparisons per Element 328 Abstract The running maximum-minimum (MAX-MIN) filter computes the maxima and minima over running windows of size w. This filter has numerous applications in signal processing and time series analysis. We present an easy-to-implement online algorithm requiring no more than 3 comparisons per element, in the worst case. Comparatively, no algorithm is known to compute the running maximum (or minimum) filter in 1.5 comparisons per element, in the worst case. Our algorithm has reduced latency and memory usage. ACM CCS Categories and Subject Descriptors: F.2.1 [Analysis of Algorithms and Problem Complexity]: Numerical Algorithms and Problems Key words: design of algorithms, data streams, time series, latency, monotonicity NOTE: Michel Schellekens, Rachit Agarwal, Emanuel Popovici and Ka Lok Man. A Simplified Derivation of Timing Complexity Lower Bounds for Sorting by Comparisons 340 Abstract We present a simplified derivation of the fact that the complexity-theoretic lower bound of comparison-based sorting algorithms, both for the worst-case and for the average-case time measure, is \Omega(nlogn). The standard proofs typically are directly presented over decision trees. The proof for the average-case however relies on differential calculus, which presents a main hurdle in undergraduate presentations. Here we present a simplified derivation of this result based on the well-known Kraft inequality for binary trees. This inequality enables one to derive the worst-case lower bound via a very simple argument. 
It also yields the average-case lower bound, via an argument similar to that for the worst case, combined with an elegant and simple inequality due to the Danish mathematician Jensen. The Jensen inequality essentially implies that, for convex functions, the function value of a mean of numbers is bounded by the mean of the function values of these numbers. This approach removes the need to present the results based on differential calculus and makes the material easily accessible for undergraduate Computer Science students. ACM CCS Categories and Subject Descriptors: F.2 [Analysis of Algorithms and Problem Complexity] Key words: comparison-based algorithms, sorting, lower bounds, complexity Author Index Volume 13 (2006) 347 VOLUME 12 NUMBER 1 SPRING 2005 Danny Krizanc, Pat Morin and Michiel Smid Range Mode and Range Median Queries on Lists and Trees 1 Abstract We consider algorithms for preprocessing labelled lists and trees so that, for any two nodes u and v, we can answer queries of the form: What is the mode or median label in the sequence of labels on the path from u to v? ACM CCS Categories and Subject Descriptors: E.1 [Data Structures]; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems Key words: range search, range statistics, range mode, range median Wan Fokkink, Jaap-Henk Hoepman and Jun Pang A Note on K-State Self-stabilization in a Ring with K=N 18 Abstract We show that, contrary to common belief, Dijkstra's K-state mutual exclusion algorithm on a ring also stabilizes when the number K of states per process is one less than the number N+1 of processes in the ring. We formalize the algorithm and verify the proof in PVS, based on Qadeer and Shankar's work. We show that K=N is sharp by giving a counter-example for K=N-1. 
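For small rings, the claims in the abstract above can be checked by brute force. The sketch below is an illustration under stated assumptions, not the paper's PVS formalization: it explores Dijkstra's K-state ring under a central daemon and uses the standard convergence criterion that the protocol stabilizes iff no cycle of moves stays entirely within illegitimate configurations (those with more than one privilege).

```python
from itertools import product

def stabilizes(n, K):
    """Brute-force convergence check for Dijkstra's K-state token ring
    with n machines.  Machine 0 is privileged when its state equals its
    left neighbour's and moves by incrementing modulo K; machine i > 0 is
    privileged when its state differs from machine i-1's and moves by
    copying it."""
    def privileged(c):
        return [i for i in range(n)
                if (i == 0 and c[0] == c[-1]) or (i > 0 and c[i] != c[i - 1])]

    def step(c, i):
        c = list(c)
        c[i] = (c[0] + 1) % K if i == 0 else c[i - 1]
        return tuple(c)

    bad = {c for c in product(range(K), repeat=n) if len(privileged(c)) > 1}
    # Depth-first search for a cycle inside the illegitimate subgraph.
    WHITE, GREY, BLACK = 0, 1, 2
    colour = dict.fromkeys(bad, WHITE)

    def has_cycle(c):
        colour[c] = GREY
        for i in privileged(c):
            nxt = step(c, i)
            if nxt in bad and (colour[nxt] == GREY
                              or (colour[nxt] == WHITE and has_cycle(nxt))):
                return True
        colour[c] = BLACK
        return False

    return not any(colour[c] == WHITE and has_cycle(c) for c in bad)
```

With 4 processes the check agrees with the paper: K = 3 (one less than the number of processes) stabilizes, while K = 2 admits a loop of multi-privilege configurations, e.g. starting from states (0,1,0,1).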
ACM CCS Categories and Subject Descriptors: C.2.2 [Computer-Communication Networks]: Network Protocols; C.2.4 [Computer-Communication Networks]: Distributed Systems; D.2.4 [Software Engineering]: Software/Program Verification; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs Key words: distributed computing, fault tolerance, self-stabilization, mutual exclusion, formal verification, PVS Amr Elmasry Deterministic Jumplists 27 Abstract We give a deterministic version of the randomized jumplists recently introduced by Brönnimann et al. [2003]. The new structure supports searches in worst-case logarithmic time, insertions and deletions in amortized logarithmic time (versus expected logarithmic time for the randomized version), and the successor of an element can be accessed in worst-case constant time (similar to the randomized version). As for the randomized version, our deterministic counterpart does not involve duplicate indexing information or extra pointers to access the successor, in contrast with other known dictionary structures. A Jumplist is basically an ordered linked list with an extra jump pointer per node to speed up the search, using a total of two pointers and two integers per node. All dictionary operations are implemented in an easy, efficient and compact way. 
ACM CCS Categories and Subject Descriptors: E.1 [Data Structures] -- lists, stacks, and queues; E.2 [Data Storage Representations] -- linked representations; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems -- sorting and searching Key words: data structures, dictionaries, weight-balanced trees, linked lists, amortized analysis Veli Mäkinen and Gonzalo Navarro Succinct Suffix Arrays based on Run-Length Encoding 40 Abstract A succinct full-text self-index is a data structure built on a text T=t_1t_2...t_n, which takes little space (ideally close to that of the compressed text), permits efficient search for the occurrences of a pattern P=p_1p_2...p_m in T, and is able to reproduce any text substring, so the self-index replaces the text. Several remarkable self-indexes have been developed in recent years. Many of those take space proportional to nH_0 or nH_k bits, where H_k is the kth order empirical entropy of T. The time to count how many times P occurs in T ranges from O(m) to O(m log n). In this paper we present a new self-index, called RLFM index for ``run-length FM-index'', that counts the occurrences of P in T in O(m) time when the alphabet size is \sigma=O(\polylog(n)). The RLFM index requires nH_k log\sigma+O(n) bits of space, for any k <= \alpha log_\sigma n and constant 0<\alpha<1. Previous indexes that achieve O(m) counting time either require more than nH_0 bits of space or require that \sigma=O(1). We also show that the RLFM index can be enhanced to locate occurrences in the text and display text substrings in time independent of \sigma. In addition, we prove a close relationship between the kth order entropy of the text and some regularities that show up in its suffix array and in the Burrows-Wheeler transform of T. This relationship is of independent interest and permits bounding the space occupancy of the RLFM index, as well as that of other existing compressed indexes. 
Finally, we present some practical considerations in order to implement the RLFM index. We empirically compare our index against the best existing implementations and show that it is practical and competitive against those. In passing, we obtain a competitive implementation of an existing theoretical proposal that can be seen as a simplified RLFM index, and explore other practical ideas such as Huffman-shaped wavelet trees. ACM CCS Categories and Subject Descriptors: E.1 [Data structures]; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems -- computations on discrete structures, pattern matching, sorting and searching; H.3 [Information Storage and Retrieval] Key words: compressed full-text self-indexes, empirical k-th order entropy, indexed string matching, text retrieval, Burrows-Wheeler transform, suffix arrays, run-length compression VOLUME 12 NUMBER 2 SUMMER 2005 SELECTED PAPERS OF THE SIXTEENTH NORDIC WORKSHOP ON PROGRAMMING THEORY (NWPT'04), OCTOBER 6--8, 2004: Paul Pettersson and Wang Yi Guest Editors' Foreword 67 Juhan Ernits Memory Arbiter Synthesis and Verification for a Radar Memory Interface Card 68 Abstract The target system of this paper is a radar memory interface card described in the IST AMETIST project. We present a way to synthesise and verify a memory arbiter for the interface card by specifying two different problems of logic model checking. In the process, we minimise the amount of memory used for intermediate buffering of data streams by augmenting the model with cost variables and applying a guided model checker --- Uppaal CORA. It is verified that the resultant arbiter does not deadlock and never starves nor overflows any of the buffers. The model is constructed as an abstraction of the behaviour of the system from the point of view of memory communication. 
The key factors in the success of the synthesis task are the rather simple abstract model and the application of bit-state hashing for speeding up reachability. It is suggested that sweeping over a range of hash table sizes to enhance the performance of reachability analysis would be a practical improvement for solving synthesis tasks. ACM CCS Categories and Subject Descriptors: B.8.2 [Performance and Reliability]: Performance Analysis and Design Aids; C.4 [Performance of Systems] -- modelling techniques; D.2.2 [Software Engineering]: Design Tools and Techniques Key words: formal analysis, bit-state hashing, hardware synthesis, guided model checking Lars Kristiansen and Paul J. Voda Programming Languages Capturing Complexity Classes 89 Abstract We investigate an imperative and a functional programming language. The computational power of fragments of these languages induces two hierarchies of complexity classes. Our first main theorem says that these hierarchies match, level by level, a complexity-theoretic alternating space-time hierarchy known from the literature. Our second main theorem says that a slightly different complexity-theoretic hierarchy (the Goerdt-Seidl hierarchy) can also be captured by hierarchies induced by fragments of the programming languages. Well known complexity classes like LOGSPACE, LINSPACE, P, PSPACE, etc., occur in the hierarchies. ACM CCS Categories and Subject Descriptors: F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic; F.1.3 [Computation by Abstract Devices]: Complexity Measures and Classes Key words: imperative and functional programming languages, typed \lambda-calculi, higher types, complexity classes, computational complexity Marcel Kyas, Frank S. de Boer and Willem-Paul de Roever A Compositional Trace Logic for Behavioural Interface Specifications 116 Abstract We describe a compositional trace logic for behavioural interface specifications and corresponding proof rules for compositional reasoning. 
The trace logic is defined in terms of axioms in higher-order logic. This trace logic is applicable to any object-oriented programming language. We treat object creation without observing the explicit act of creation. We prove a soundness result for this approach using the theory of Galois connections. We show the correctness of a specification of the Sieve of Eratosthenes using the proposed method. This notion of compositionality allows the verification of systems during the early stages of a design. ACM CCS Categories and Subject Descriptors: D.2.4 [Software Engineering]: Software/Program Verification -- formal methods; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs -- specification techniques Key words: program specification, interface invariants, object creation Härmel Nestra Transfinite Corecursion 133 Abstract This paper presents theorems that make it easy to define transfinite semantics for programming languages. We call these theorems "corecursion theorems" because they state the existence of a function satisfying certain conditions analogous to the usual corecurrent equations for defining stream-valued functions. Transfinite semantics have been proposed for overcoming the "semantic anomaly" of program slicing. In this paper, we define two example semantics which justify this approach, relying on our corecursion theorems. 
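For readers unfamiliar with corecursion, the "corecurrent equations" mentioned in the abstract above can be illustrated for ordinary omega-length streams (the transfinite case is precisely what the paper extends, and is beyond what a generator can express). Everything below is illustrative rather than taken from the paper: a Python generator stands in for a lazily produced stream, and the trace function satisfies the corecursive equation trace(s) = s : trace(step(s)).

```python
def trace(state, step):
    """Corecursively unfold the (possibly infinite) stream of program
    states: trace(s) = s : trace(step(s)).  Laziness of the generator is
    what makes the corecursive definition productive."""
    while True:
        yield state
        state = step(state)

def take(n, stream):
    """Observe a finite prefix of a stream."""
    return [x for _, x in zip(range(n), stream)]
```

For a toy program whose step adds 2 to its single variable, trace(0, ...) is the infinite stream 0, 2, 4, 6, ... of which any finite prefix can be observed.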
ACM CCS Categories and Subject Descriptors: D.3.1 [Programming Languages]: Formal Definitions and Theory -- semantics; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages -- operational semantics; F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic -- set theory; I.2.2 [Artificial Intelligence]: Automatic Programming -- program transformation Key words: corecursion, transfinite semantics, program slicing Ragnhild Kobro Runde, Øystein Haugen and Ketil Stølen Refining UML Interactions with Underspecification and Nondeterminism 157 Abstract STAIRS is an approach to the compositional development of UML interactions, such as sequence diagrams and interaction overview diagrams. An important aspect of STAIRS is the ability to distinguish between underspecification and inherent nondeterminism through the use of potential and mandatory alternatives. This paper investigates this distinction in more detail. Refinement notions explain when (and how) both kinds of nondeterminism may be reduced during the development process. In particular, in this paper we extend STAIRS with guards, which may be used to specify the choice between alternatives. Finally, we introduce the notion of an implementation and define what it means for an implementation to be correct with respect to a specification. ACM CCS Categories and Subject Descriptors: D.2.1 [Software Engineering]: Requirements/Specifications; D.3.1 [Programming Languages]: Formal Definitions and Theory; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs Key words: UML interactions, formal semantics, nondeterminism, underspecification, refinement Todd L. Veldhuizen Language Embeddings that Preserve Staging and Safety 189 Abstract We study embeddings of programming languages into one another that preserve what reductions take place at compile-time, i.e., staging. 
A certain condition --- what we call a `Turing complete kernel' --- is sufficient for a language to be stage-universal in the sense that any language may be embedded in it while preserving staging. A similar line of reasoning yields the notion of safety-preserving embeddings, and a useful characterization of safety-universality. Languages universal with respect to staging and safety are good candidates for realizing domain-specific embedded languages (DSELs) and `active libraries' that provide domain-specific optimizations and safety checks. ACM CCS Categories and Subject Descriptors: D.3.3 [Programming Languages]: Language Constructs and Features Key words: staging, metaprogramming, extensible languages VOLUME 12 NUMBER 3 FALL 2005 SELECTED PAPERS OF THE SECOND INTERNATIONAL WORKSHOP ON MODEL-BASED METHODOLOGIES FOR PERVASIVE AND EMBEDDED SOFTWARE (MOMPES'05), JUNE 6, 2005, RENNES, FRANCE: Johan Lilius, Ricardo J. Machado, Dragos Truscan, João M. Fernandes and Ivan Porres Guest Editors' Foreword 199 Issam Al-Azzoni, Douglas G. Down and Ridha Khedri Modeling and Verification of Cryptographic Protocols Using Coloured Petri Nets and Design/CPN 201 Abstract In this paper, we present a technique to model and analyse cryptographic protocols using coloured Petri nets. A model of the protocol is constructed in a top-down manner: first the protocol is modeled without an intruder, then a generic intruder model is added. The technique is illustrated on the TMN protocol, with several mechanisms introduced to reduce the size of the occurrence graph. A smaller occurrence graph facilitates deducing whether particular security goals are met. 
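As a toy illustration of the occurrence graphs mentioned in the abstract above, the sketch below enumerates the occurrence (reachability) graph of a plain place/transition net by breadth-first search; coloured Petri nets and the Design/CPN tool layer typed tokens and arc expressions on top of this idea. The net encoding used here is an assumption made for the example, not Design/CPN's format.

```python
from collections import deque

def occurrence_graph(initial, transitions):
    """initial: dict place -> token count; transitions: list of
    (consume, produce) dicts.  Returns the set of reachable markings and
    the edges between them -- an uncoloured analogue of a CPN
    occurrence graph."""
    def fire(marking, consume, produce):
        if any(marking.get(p, 0) < k for p, k in consume.items()):
            return None                      # transition not enabled
        m = dict(marking)
        for p, k in consume.items():
            m[p] -= k
        for p, k in produce.items():
            m[p] = m.get(p, 0) + k
        return {p: k for p, k in m.items() if k}   # canonical: drop zeros

    start = frozenset(initial.items())
    seen, edges, queue = {start}, [], deque([initial])
    while queue:
        marking = queue.popleft()
        for t, (consume, produce) in enumerate(transitions):
            nxt = fire(marking, consume, produce)
            if nxt is not None:
                key = frozenset(nxt.items())
                edges.append((frozenset(marking.items()), t, key))
                if key not in seen:
                    seen.add(key)
                    queue.append(nxt)
    return seen, edges
```

Once the graph is built, a security goal such as "no marking puts a token in a `secret_leaked` place" reduces to a simple scan over `seen`, which is why keeping the occurrence graph small matters.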
ACM CCS Categories and Subject Descriptors: D.4.6 [Operating Systems]: Security and Protection -- authentication, verification; D.2.2 [Software Engineering]: Design Tools and Techniques -- Petri nets; F.4.3 [Mathematical Logic and Formal Languages]: Formal Languages; C.2.2 [Computer-Communication Networks]: Network Protocols -- protocol verification Key words: cryptographic protocols, protocol analysis, coloured Petri nets, design/CPN, security goals João Paulo Barros and Jens Bæk Jørgensen A Case Study on Coloured Petri Nets in Object-Oriented Analysis and Design 229 Abstract In this paper, we first demonstrate how a coloured Petri nets (CPN) model can be used to capture requirements for a considered example system, an elevator controller. Then, we show how this requirements-level CPN model is transformed into a design-level object-oriented CPN model, which is structurally and conceptually closer to class diagrams and object-oriented programming languages. The CPN models reduce the gap between user-level requirements and the respective implementation, thus simplifying the implementation or code generation. Finally, we discuss the code generation from object-oriented CPN models. ACM CCS Categories and Subject Descriptors: D.2.2 [Software Engineering]: Design Tools and Techniques -- Petri nets Key words: requirements engineering, executable use cases, model transformation Jonathan Billington, Guy Edward Gallasch and Laure Petrucci Fast Verification of the Class of Stop-and-Wait Protocols Modelled by Coloured Petri Nets 251 Abstract Most protocols contain parameters, such as the maximum number of retransmissions in an error recovery protocol. These parameters are instantiated with values that depend on the operating environment of the protocol. We would therefore like our formal specification or model of the system to include these parameters symbolically, where in general each parameter will have an arbitrary upper limit. The inclusion of parameters results in an infinite family of finite state systems, which makes verification difficult. 
However, techniques and tools are being developed for the verification of parametric and infinite state systems. We explore the use of one such tool, FAST, for automatically verifying several properties (such as channel bounds and the stop-and-wait property of alternating sends and receives) of the stop-and-wait class of protocols, where the maximum number of retransmissions and the maximum sequence number are considered as unbounded parameters. Coloured Petri nets (CPNs), an expressive language for representing protocols, is used to model this stop-and-wait class. However, FAST's foundation is counter systems, automata where states are a vector of non-negative integers and with operations limited to Presburger arithmetic. We therefore also present some first steps in transforming CPNs to counter systems in the context of stop-and-wait protocols operating over unbounded FIFO channels. ACM CCS Categories and Subject Descriptors: C.2.2 [Computer-Communication Networks]: Network Protocols; I.6.4 [Simulation and Modeling]: Model Validation and Analysis; F.1.1 [Computation by Abstract Devices]: Models of Computation Key words: protocols, automated parametric verification, Coloured Petri Nets, counter systems, FAST GENERAL PAPERS: Erez Petrank and Dror Rawitz The Hardness of Cache Conscious Data Placement 275 Abstract We investigate the complexity of finding an optimal placement of objects (or code) in the memory, in the sense that this placement reduces the number of cache misses during program execution to the minimum. We show that this problem is one of the toughest amongst the interesting NP optimization problems in computer science. In particular, suppose one is given a sequence of memory accesses and one has to place the data in the memory so as to minimize the number of cache misses for this sequence. We show that if P \neq NP, then one cannot efficiently approximate the optimal solution even up to a very liberal approximation ratio. 
Thus, this problem joins the family of extremely inapproximable optimization problems. Two famous members in this family are minimum graph coloring and maximum clique. In light of this harsh lower bound only mild approximation ratios can be obtained. We provide an algorithm that can map arbitrary access sequences within such a mild ratio. Next, we study the information loss when compressing the access sequence, keeping only pairwise relations. We show that the reduced information hides the optimal solution and highlights solutions that are far from optimal. Furthermore, we show that even if one restricts his attention to pairwise information, finding a good placement is computationally difficult. ACM CCS Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors -- memory management (garbage collection), optimization; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems -- computations on discrete structures, sequencing and scheduling; G.2.1 [Discrete Mathematics]: Combinatorics -- combinatorial algorithms Key words: approximation algorithm, cache conscious code rearrangement, cache conscious data placement, computational complexity, hardness of approximation, memory management Elena Prieto and Christian Sloper Reducing to Independent Set Structure -- the Case of k-Internal Spanning Tree 308 Abstract The k-Internal Spanning Tree problem asks whether a certain graph G has a spanning tree with at least k internal vertices. Basing our work on the results presented in [Prieto and Sloper 2003], we show that there exists a set of reduction rules that modify an arbitrary spanning tree of a graph into a spanning tree with no induced edges between the leaves. Thus, the rules either produce a tree with many internal vertices, effectively deciding the problem, or they identify a large independent set, the leaves, in the graph. 
Having a large independent set is beneficial, because the graph then allows both 'crown decompositions' and path decompositions. We show how this crown decomposition can be used to obtain an O(k^2) kernel for the k-Internal Spanning Tree problem, improving on the O(k^3) kernel presented in [Prieto and Sloper 2003]. ACM CCS Categories and Subject Descriptors: F.2 [Analysis of Algorithms and Problem Complexity] Key words: spanning trees, fixed-parameter tractability, independent set, crown decomposition, kernelization VOLUME 12 NUMBER 4 WINTER 2005 SELECTED PAPERS OF THE THIRD NORDIC WORKSHOP ON UML AND SOFTWARE MODELING (NWUML'2005), AUGUST 29-31, 2005: Kai Koskimies and Ludwik Kuzniarz Guest Editors' Foreword 319 Marcus Alanen, Torbjörn Lundkvist and Ivan Porres Comparison of Modeling Frameworks for Software Engineering 321 Abstract In this article, we compare several modeling frameworks: the Graph eXchange Language (GXL), the Meta Object Facility (MOF) and the ECORE metametamodel from the Eclipse Modeling Framework (EMF). They are all approaches for describing information, specifically for describing the artifacts created during software development. The expressive power of each framework determines what kinds of artifacts, and what relationships between them, can be created. Although the frameworks have several traits in common, some differences can also be found. We discuss the benefits and drawbacks of these differences.
ACM CCS Categories and Subject Descriptors: H.2.3 [Database Management]: Languages -- data description languages; D.3.3 [Programming Languages]: Language Constructs and Features -- classes and objects; F.3.3 [Logics and Meanings of Programs]: Studies of Program Constructs -- object-oriented constructs Key words: object graphs, information interchange, MOF, GXL, ECORE, XMI, XML, EMF Lukasz Dobrzanski and Ludwik Kuzniarz Practical Refactoring of Executable UML Models 343 Abstract One of the inevitable negative effects of software evolution is design erosion. Refactoring is a technique that aims at counteracting this phenomenon by successively improving design of software without changing its observable behaviour. This paper presents an overview of recent approaches to UML model refactoring, followed by results of an initial study on refactoring of executable UML models, i.e. models that are detailed enough to be automatically compiled to executable applications. It focuses on identification of refactoring areas in models built in Telelogic TAU, and it contains a description of application of several different refactorings to an exemplary executable model. ACM CCS Categories and Subject Descriptors: D.2.7 [Software Engineering]: Distribution, Maintenance, and Enhancement -- restructuring, reverse engineering, and reengineering Key words: executable UML, model refactoring Johan Lilius, Tomas Lillqvist, Torbjörn Lundkvist, Ian Oliver, Ivan Porres, Kim Sandström, Glen Sveholm and Asim Pervez Zaka An Architecture Exploration Environment for System on Chip Design 361 Abstract The MICAS architecture is a novel System-on-Chip (SoC) architecture for mobile phone peripherals. Characteristic for MICAS is the separation of data-flows from control-flows. To design MICAS systems we have implemented a tool that is based on UML and MDA. The tool makes it possible to visually develop new designs that can then be transformed into a SystemC simulator for further testing. 
The MICAS tool is implemented using Coral. Coral is a metamodel-independent software platform to create, edit and transform models and metamodels at run-time. Coral provides support for a number of languages, including UML and MOF. ACM CCS Categories and Subject Descriptors: C.3 [Special-Purpose and Application-Based Systems] -- real-time and embedded systems; C.5.4 [Computer System Implementation]: VLSI Systems; D.2.2 [Software Engineering]: Design Tools and Techniques -- computer-aided software engineering Key words: System-on-Chip (SoC), Model Based Development (MBD), Unified Modeling Language (UML), software modeling tools Sven Wenzel Automatic Detection of Incomplete Instances of Structural Patterns in UML Class Diagrams 379 Abstract An approach for the detection of structural patterns based on UML class diagrams is presented. By using a fuzzy-like evaluation mechanism, the introduced approach is able to recognize not only entire patterns but also incomplete instances. For structural patterns in general, knowledge about the patterns used assists a developer not only while maintaining or reverse engineering existing software, but already while designing or implementing new software. The information about the instantiation status is essential for a developer using, for example, specialization patterns that guide the extension of a particular framework. The developer can be supported by information about already instantiated patterns as well as partial instances, which occur rather often during development. ACM CCS Categories and Subject Descriptors: I.5 [Pattern Recognition]; D.2.7 [Software Engineering]: Distribution, Maintenance, and Enhancement -- reverse engineering Key words: UML, pattern detection, reverse engineering, forward engineering, incomplete instances Author Index Volume 12 (2005) 395 VOLUME 11 NUMBER 1 SPRING 2004 Martin Gairing, Robert M. Geist, Stephen T.
Hedetniemi and Petter Kristiansen A Self-stabilizing Algorithm for Maximal 2-packing 1 Abstract In the self-stabilizing algorithmic paradigm for distributed computing each node has only a local view of the system, yet in a finite amount of time the system converges to a global state, satisfying some desired property. In a graph G = (V, E), a subset S \subseteq V is a 2-packing if all nodes in S lie at distance three or more from each other, counting the number of edges. In this paper we present an ID-based, self-stabilizing algorithm for finding a maximal 2-packing, a non-local property, in an arbitrary graph. We also show how to use Markov analysis to analyse the behaviour of a non-ID-based version of the algorithm. ACM CCS Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems; D.1.3 [Programming Techniques]: Concurrent Programming; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems; G.2.2 [Discrete Mathematics]: Graph Theory; G.3 [Probability and Statistics] -- Markov processes Key words: self-stabilizing algorithms, 2-packing, Markov analysis Terje Sivertsen Undefinedness vs. Underspecification in HALDEN ASL 12 Abstract This paper discusses aspects related to evaluation of terms in HALDEN ASL, the algebraic specification language supported by the HALDEN Prover. Particular emphasis is given to the treatment of undefinedness and underspecification, relating to partial and total functions, respectively. The paper first provides an overview of HALDEN ASL and its application in the specification of HALDEN Prover, and compares the treatment of partiality with approaches followed in other languages. The specification of the HALDEN Prover is used as a basis for presenting the approach to term evaluation, including the treatment of partiality, strictness, undefinedness propagation, and underspecified total functions. 
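The distinction between undefinedness and underspecification discussed in the HALDEN ASL abstract above can be illustrated with a small sketch (illustrative Python only, not HALDEN ASL; the `UNDEF` marker and all function names are hypothetical): a partial function yields an explicit undefined value that strict operators propagate, while an underspecified total function always returns some definite value.

```python
# Illustrative sketch of strict undefinedness propagation (not HALDEN ASL).
UNDEF = object()  # hypothetical marker for an undefined (partial-function) result

def div(x, y):
    # Division is partial: undefined when the divisor is zero.
    return UNDEF if y == 0 else x // y

def strict_add(x, y):
    # A strict total operator: an undefined argument propagates.
    if x is UNDEF or y is UNDEF:
        return UNDEF
    return x + y

def underspec_head(xs):
    # An underspecified *total* function: its value on [] is left
    # unspecified by the specification, but here it is some definite value.
    return xs[0] if xs else 0

# Undefinedness propagates through the enclosing term:
assert strict_add(div(10, 0), 5) is UNDEF
# Underspecification does not: the result is merely unconstrained.
assert strict_add(underspec_head([]), 5) == 5
```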
ACM CCS Categories and Subject Descriptors: D.1.1 [Programming Techniques]: Applicative Programming; D.2.1 [Software Engineering]: Requirements/Specifications -- methodologies; D.2.4 [Software Engineering]: Software/Program Verification -- formal methods; D.3.3 [Programming Languages]: Language Constructs and Features -- abstract data types Key words: algebraic specification, formal software development, specification, term rewriting, undefinedness propagation Zdenek Tronícek Episode Directed Acyclic Subsequence Graph 35 Abstract Given two strings, a pattern P and a text T, an episode substring of P in T is a minimal substring of T that contains P as a subsequence. The episode matching problem is to find all episode substrings. We present a new data structure, called the Episode Directed Acyclic Subsequence Graph, that can be used to solve the problem efficiently. With this structure, we can find, for example, all episode substrings or the shortest episode substring in O(mw) time, where m is the length of the pattern and w is the number of episode substrings found. ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems -- pattern matching Key words: episode matching, subsequences, finite automata Alexandru Berlea and Helmut Seidl Binary Queries for Document Trees 41 Abstract Motivated by XML applications, we address the problem of answering k-ary queries, i.e. simultaneously locating k nodes of an input tree as specified by a given relation. In particular, we discuss how binary queries can be used as a means of navigation in XML document transformations. We introduce a grammar-based approach to specifying k-ary queries. An efficient tree-automata-based implementation of unary queries is reviewed and the extensions needed to implement k-ary queries are presented. In particular, an efficient solution for the evaluation of binary queries is provided and proven correct.
We introduce fxgrep, a practical implementation of unary and binary queries for XML. By means of fxgrep and of the fxt XML transformation language, we suggest how binary queries can be used to increase the expressivity of rule-based transformations. We compare our work with other querying languages and discuss how our ideas can be used in other settings. ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems; I.7 [Document and Text Processing]; H.3 [Information Storage and Retrieval] Key words: XML, tree automata, querying, transforming VOLUME 11 NUMBER 2 SUMMER 2004 SELECTED PAPERS OF THE FIFTEENTH NORDIC WORKSHOP ON PROGRAMMING THEORY (NWPT'03), OCTOBER 29--31, 2003: Kaisa Sere and Marina Waldén Guest Editors' Foreword 73 Pontus Boström and Marina Waldén Implementation of Control Systems Using B Action Systems: A Case Study 75 Abstract In this paper we present a methodology for implementing reactive control systems of industrial size using formal methods. The methodology is applied in a case study from the healthcare technology field. We use B Action Systems as our theoretical framework for developing reliable and correct control systems in a stepwise manner. For proving the correctness of each development step we rely on the tool support provided for the B Method. With the tool, the formally developed system can be translated to a programming language. Hence, the implementation method provides a precise mapping from the specification to the code executed on the computer. This is needed especially in industry for developing large correct systems. In the case study we develop software for part of a microplate liquid handling workstation. The design methodology has previously been used for specifying control systems of industrial size, but here we extend the methodology to also consider code generation issues for such systems.
ACM CCS Categories and Subject Descriptors: D.2.4 [Software Engineering]: Software/Program Verification; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; C.3 [Special-Purpose and Application-Based Systems] Key words: B Method, Action Systems, discrete control systems, stepwise refinement, implementation, industrial applications Orieta Celiku and Annabelle McIver Cost-Based Analysis of Probabilistic Programs Mechanised in HOL 102 Abstract We provide a HOL formalisation for analysing expected time bounds for probabilistic programs. Our formalisation is based on the quantitative program logic of Morgan et al. [21] and McIver's extension of it [17] to include performance-style operators. In addition we provide some novel results based on probabilistic data refinement which we use to improve the utility of the basic method. ACM CCS Categories and Subject Descriptors: F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs -- mechanical verification, logics of programs; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages -- program analysis Key words: quantitative program logic, expected time bounds, probability, nondeterminism, program refinement, HOL Elena Fersman and Wang Yi A Generic Approach to Schedulability Analysis of Real-Time Tasks 129 Abstract In off-line schedulability tests for real time systems, tasks are usually assumed to be periodic, i.e. they are released with fixed rates. To relax the assumption of complete knowledge on arrival times, we propose to use timed automata to describe task arrival patterns. In a recent work, it is shown that for fixed priority scheduling strategy and tasks with only timing constraints (i.e. execution time and deadline), the schedulability of such models can be checked by reachability analysis on timed automata with two clocks. In this paper, we extend the above result to deal with precedence and resource constraints. 
This yields a unified task model, which is expressive enough to describe concurrency, synchronization, and tasks that may be periodic, aperiodic, preemptive or non-preemptive, with (or without) combinations of timing, precedence, and resource constraints. We present an operational semantics for the model, and show that the related schedulability analysis problem can be solved efficiently using the same technique. The presented results have been implemented in the Times tool for automated schedulability analysis. ACM CCS Categories and Subject Descriptors: D.2.2 [Software Engineering]: Design Tools and Techniques; D.2.4 [Software Engineering]: Software/Program Verification; D.3.1 [Programming Languages]: Formal Definitions and Theory; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages Key words: real-time systems, embedded systems, timed automata, schedulability, precedence constraints, resource constraints Timo Latvala and Heikki Tauriainen Improved On-the-fly Verification with Testers 148 Abstract We present a new memory-efficient algorithm for on-the-fly verification of labelled transition systems (LTSs) with testers. To our knowledge, this is the first thoroughly presented solution for verifying all properties specifiable with testers. The algorithm requires four passes over the state space of the composition of the LTS and the tester. ACM CCS Categories and Subject Descriptors: F.1.2 [Computation by Abstract Devices]: Modes of Computation -- interactive and reactive computation; D.2.4 [Software Engineering]: Software/Program Verification -- formal methods Key words: testers, on-the-fly verification, algorithms Rimvydas Ruksenas A Rigorous Environment for Development of Concurrent Systems 165 Abstract We present a theorem-prover-based environment which supports compositional refinement of action systems.
Our emphasis is on the stepwise refinement of action system interfaces. This allows us to view the stepwise refinement of action systems as component-oriented development. Our tool is an extension of the Refinement Calculator, which is based on the window inference style of reasoning. Hence, we also describe the extensions of the HOL window inference tool, developed to support action system refinement but also applicable in a broader context. ACM CCS Categories and Subject Descriptors: F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs -- specification techniques, mechanical verification; D.2.4 [Software Engineering]: Software/Program Verification -- formal methods, correctness proofs; D.2.2 [Software Engineering]: Design Tools and Techniques -- modules and interfaces; D.1.3 [Programming Techniques]: Concurrent Programming Key words: concurrent systems, stepwise refinement, refinement tools, theorem provers, compositionality Gerardo Schneider Computing Invariance Kernels of Polygonal Hybrid Systems 194 Abstract Polygonal hybrid systems are a subclass of planar hybrid automata which can be represented by piecewise constant differential inclusions. One way of analysing such systems (and hybrid systems in general) is through the study of their phase portrait, which characterises the system's qualitative behaviour. In this paper we identify and compute an important object of the phase portraits of polygonal hybrid systems, namely invariance kernels. An invariant set is a set of points such that any trajectory starting at such a point necessarily keeps rotating in the set forever, and the invariance kernel is the largest such set. We show that this kernel is a non-convex polygon and we give a non-iterative algorithm for computing the coordinates of its vertices and edges. Moreover, we show some properties of the simple cycles of such systems.
ACM CCS Categories and Subject Descriptors: C.3 [Special-Purpose and Application-Based Systems]; D.2.4 [Software Engineering]: Software/Program Verification; J.2 [Physical Sciences and Engineering] Key words: hybrid systems, differential inclusions, invariance kernel, algorithmic analysis VOLUME 11 NUMBER 3 FALL 2004 SELECTED PAPERS OF THE ELEVENTH NORDIC WORKSHOP ON PROGRAMMING ENVIRONMENT RESEARCH (NWPER'2004), AND THE SECOND NORDIC WORKSHOP ON UML, MODELING, METHODS AND TOOLS (NWUML'2004), AUGUST 17--20, 2004: Johan Lilius and Kasper Østerbye Guest Editors' Foreword: Programming and Software Development Environment Research in the Nordic Countries 211 Anders Nilsson, Anders Ive, Torbjörn Ekman and Görel Hedin Implementing Java Compilers Using ReRAGs 213 Abstract Rewritable Reference Attributed Grammars (ReRAGs) is a recently developed compiler-compiler technology based on object-orientation, aspect-orientation, reference attributed grammars, and conditional rewrites. In this paper we describe our experiences from using ReRAGs for implementing Java compilers. We illustrate how the usage of ReRAGs renders a rather compact, yet easy-to-understand and modular compiler specification where code analysis, restructurings, and optimizations can be conveniently described as aspects performing computations and transformations on the abstract syntax tree. Currently, we have implemented two compilers: one that generates C code with real-time support, and one that generates Java bytecode. Both share the same front end. 
ACM CCS Categories and Subject Descriptors: D.1.5 [Programming Techniques]: Object-oriented Programming; D.2.13 [Software Engineering]: Reusable Software; D.3.4 [Programming Languages]: Processors -- code generation, translator writing systems and compiler generators Key words: Java compiler, modular implementation, declarative implementation, aspect-oriented programming, conditional rewrites, attribute grammars, reference attributes Petri Selonen, Mika Siikarla, Kai Koskimies and Tommi Mikkonen Towards the Unification of Patterns and Profiles in UML 235 Abstract Patterns have become a popular means to express recurring software solutions, as exemplified by design patterns. On the other hand, so-called profiles are used in UML to define various kinds of domain-specific concepts and conventions as extensions of the UML metamodel. In this paper, we show how patterns and profiles can be unified in the UML context, using the UML metamodel as the common basis. We argue that this result has far-reaching implications for tool development, and helps us to understand the relationship between these two central software modeling concepts. ACM CCS Categories and Subject Descriptors: D.2.1 [Software Engineering]: Requirements/Specifications -- methodologies, tools; D.2.2 [Software Engineering]: Design Tools and Techniques Key words: UML, design patterns, profiles Miroslaw Staron, Ludwik Kuzniarz and Ludwik Wallin Case Study on a Process of Industrial MDA Realization -- Determinants of Effectiveness 254 Abstract The Model Driven Architecture (MDA) initiative has gained considerable and growing interest in both academia and industry, with the Unified Modeling Language (UML) placed at the center of MDA. A successful MDA realization in industry is determined by certain factors. In this paper, we identify key factors for the efficient accomplishment of MDA by means of an industrial case study.
The case study was performed in the context of a large IT company with distributed development units developing complex systems. The factors identified are grouped into two categories: those associated with the usage, and those with the development, of an MDA-based framework. The factors stem from different activities performed during the process of MDA realization. We studied the process of MDA realization at the company and we discuss the factors in the context of that process. In addition to the identified process and factors, there was a need to investigate the importance of different elements in the process and to assess how they were perceived in the team. The results of a prioritization experiment performed in the course of the study show that language customization plays a central role in MDA, even though the customization was initially perceived as a background activity by the framework constructors. ACM CCS Categories and Subject Descriptors: D.2.2 [Software Engineering]: Design Tools and Techniques -- object-oriented design methods; K.6.3 [Management of Computing and Information Systems]: Software Management -- software development, software process Key words: MDA, model driven software development, modeling, UML, empirical study Harald Störrle Structured Nodes in UML 2.0 Activities 279 Abstract The upcoming major revision of the UML (see [OMG. 2003b. OMG Unified Modeling Language: Superstructure (final adopted spec, version 2.0, 2003-01-02). Tech. report, Object Management Group.]) has introduced significant changes and additions to "the lingua franca of Software Engineering". Within the UML, activity diagrams are particularly prominent, since they are the natural choice when it comes to the modeling of web-services, workflows, and service-oriented architectures. Among the most novel concepts introduced are so-called structured nodes (StructuredActivityNodes in the metamodel). This concept includes features like loops, expansion regions, collection-valued parameters, and data streaming.
Building on substantial previous work by the author, the purpose of this paper is to better understand these new concepts and notations, and to define a semantics for them. Since the UML standard is still immature in some parts, this article is restricted to those concepts for which a reliable interpretation is currently possible. This article is a follow-up to [Störrle, Harald. 2004d. Semantics of Expansion Nodes in UML 2.0 Activities. In Proc. 2nd Nordic Ws. on UML, Modeling, Methods and Tools (NWUML'04)]. ACM CCS Categories and Subject Descriptors: D.2.1 [Software Engineering]: Requirements/Specifications; D.2.2 [Software Engineering]: Design Tools and Techniques; D.2.10 [Software Engineering]: Design; D.3.3 [Programming Languages]: Language Constructs and Features Key words: UML 2.0, activity diagrams, structured nodes, loops, collection-valued parameters, streaming Thomas Vestdam and Kurt Nørmark Maintaining Program Understanding -- Issues, Tools, and Future Directions 303 Abstract The understanding of a program is a key aspect of software development. This understanding is a prerequisite for the initial development efforts. This paper is concerned with the challenge of maintaining program understanding with the purpose of supporting later phases in the program's lifetime. One approach to maintaining program understanding is to document the decisions and rationales behind a program as informal textual explanations -- internal documentation. The starting point of this paper is a particular paradigm for program documentation called Elucidative Programming. As the first contribution of this paper, three key documentation issues are identified on the basis of experience with Elucidative Programming. Documentation motifs represent thematic elements of software, which typically cut across the structure of the source program files. Documentation proximity characterizes the distance between the documentation and the program.
Documentation occasions are points in time for capturing and formulating the understanding of the program. Over the years, a large number of documentation tools have been developed. As the second contribution of the paper, a number of contemporary documentation tools are reviewed. The tools are selected on the basis of their relevance to the key documentation issues, and of the attention the particular tool has received in the documentation communities. As a conclusion of the paper, and as a final contribution, a number of future directions and challenges are outlined. ACM CCS Categories and Subject Descriptors: D.2.7 [Software Engineering]: Distribution, Maintenance, and Enhancement Key words: elucidative programming, program understanding, program comprehension, contemporary documentation tools, internal documentation VOLUME 11 NUMBER 4 WINTER 2004 Erkki Sutinen and Jorma Tarhio Approximate String Matching with Ordered q-Grams 321 Abstract Approximate string matching with k differences is considered. Filtration of the text is a widely adopted technique to reduce the text area processed by dynamic programming. We present sublinear filtration algorithms based on the locations of q-grams in the pattern. Samples of q-grams are drawn from the text at fixed periods, and only if consecutive samples appear in the pattern in approximately the same configuration is the text area examined with dynamic programming. The algorithm LEQ searches for exact occurrences of the pattern q-grams, whereas the algorithm LAQ searches for approximate occurrences of them. In addition, a static variation of LEQ is presented. The focus of the paper is on the combinatorial properties of the sampling approach.
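The filtration principle described in the q-gram abstract above can be sketched in a few lines (an illustrative Python sketch of the general filtering idea only, not the LEQ/LAQ algorithms; the function and parameter names are hypothetical): q-grams are sampled from the text at a fixed period, and only text areas whose sample also occurs in the pattern would be handed to dynamic programming for verification.

```python
# Illustrative q-gram filtration sketch (the general idea, not LEQ/LAQ).
def qgrams(s, q):
    # The set of all substrings of length q of s.
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def candidate_areas(text, pattern, q, period):
    # Sample one q-gram from the text at each fixed period; only
    # positions whose sample also occurs in the pattern survive the
    # filter and would then be verified with dynamic programming.
    pat = qgrams(pattern, q)
    return [i for i in range(0, len(text) - q + 1, period)
            if text[i:i + q] in pat]

# Only areas near a real occurrence pass the filter:
hits = candidate_areas("xxxxabcdxxxx", "abcd", q=3, period=2)  # hits == [4]
```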
ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems Key words: string matching, filtration, q-grams, indexing Sergey Bereg and Michael Segal Dynamic Algorithms for Approximating Interdistances 344 Abstract In this paper we present efficient dynamic algorithms for approximating the k^{th}, 1 <= k <= \binom{n}{2}, distance defined by some pair of points from a given set S of n points in d-dimensional space. Our technique is based on the dynamization of the well-separated pair decomposition proposed in [Callahan, P. and Kosaraju, R. 1995. A Decomposition of Multidimensional Point Sets with Applications to k-Nearest Neighbors and n-Body Potential Fields. Journal of ACM 42, 1, 67--90.], on computing approximate nearest and farthest neighbors [Indyk, P. 2000. High-dimensional Computational Geometry. Ph.D. thesis, Stanford University, 68--70.], [Kapoor, S. and Smid, M. 1996. New Techniques for Exact and Approximate Dynamic Closest-Point Problems. SIAM Journal on Computing 25, 4, 775--796.] and on the use of persistent search trees [Driscoll, J., Sarnak, N., Sleator, D., and Tarjan, R. 1989. Making data structures persistent. Journal of Computer and System Sciences 38, 1, 86--124.]. ACM CCS Categories and Subject Descriptors: G.4 [Mathematical Software] -- algorithm design and analysis Key words: distance selection, persistency, well-separated pair decomposition, approximate neighbors Gonzalo Navarro Approximate Regular Expression Searching with Arbitrary Integer Weights 356 Abstract We present a bit-parallel technique to search a text of length n for a regular expression of m symbols permitting k differences in worst-case time O(mn/log_k s), where s is the amount of main memory that can be allocated. The algorithm permits arbitrary integer weights and matches the complexity of the best previous techniques, but it is simpler and faster in practice.
Along the way, we define a new recurrence for approximate searching in which the current values depend only on previous values. Interestingly, our algorithm turns out to be a relevant option also for simple approximate string matching with arbitrary integer weights. ACM CCS Categories and Subject Descriptors: E.1 [Data Structures]; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems -- computations on discrete structures, pattern matching, sorting and searching; H.3 [Information Storage and Retrieval] Key words: approximate string matching, string matching with differences, regular expression searching, bit-parallelism, computational biology Hans L. Bodlaender and Jan Arne Telle Space-Efficient Construction Variants of Dynamic Programming 374 Abstract Many dynamic programming algorithms that solve a decision problem can be modified into algorithms that solve the construction variant of the problem, by additional bookkeeping and by going backwards through the stored answers to subproblems. It is also well known that for many dynamic programming algorithms one can save memory space by throwing away tables of information that are no longer needed, thus reusing the memory. Somewhat surprisingly, the observation that these two modifications cannot be combined is frequently not made. In this paper we consider the case of dynamic programming algorithms on graphs of bounded treewidth. We give algorithms that solve the construction variants of such problems using only twice the memory of the decision versions, with only a logarithmic-factor increase in running time. Using the concept of strong directed treewidth, we then discuss how these algorithms can be applied to dynamic programming in general.
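The tension described in the abstract above, between space reuse and reconstructing a solution, already shows up in textbook dynamic programming (a minimal Python sketch using 0/1 knapsack as a stand-in, not the bounded-treewidth algorithms of the paper; all names are illustrative): the rolling-array decision version discards exactly the tables that naive backtracking would need.

```python
# Minimal illustration (0/1 knapsack): the space-saving trick of
# reusing one DP row answers the decision question, but discards the
# tables that a naive construction (traceback) would have to read.
def knapsack_value(items, cap):
    # items: list of (weight, value); O(cap) space via a rolling row.
    row = [0] * (cap + 1)
    for w, v in items:
        for c in range(cap, w - 1, -1):
            row[c] = max(row[c], row[c - w] + v)
    return row[cap]

def knapsack_choice(items, cap):
    # Construction variant: keeps all rows (O(n*cap) space) so that
    # the chosen item set can be recovered by walking backwards.
    rows = [[0] * (cap + 1)]
    for w, v in items:
        prev = rows[-1]
        rows.append([max(prev[c], (prev[c - w] + v) if c >= w else 0)
                     for c in range(cap + 1)])
    chosen, c = [], cap
    for i in range(len(items), 0, -1):
        if rows[i][c] != rows[i - 1][c]:   # item i-1 was taken
            chosen.append(i - 1)
            c -= items[i - 1][0]
    return rows[-1][cap], chosen[::-1]
```

Both return the same optimal value; only the second can name the items achieving it, at the cost of keeping every table.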
ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems -- computations on discrete structures; E.1 [Data Structures] -- graphs and networks; F.2.3 [Analysis of Algorithms and Problem Complexity]: Tradeoffs between Complexity Measures; G.2.2 [Discrete Mathematics]: Graph Theory -- graph algorithms, trees Key words: algorithms and data structures, dynamic programming, memory use of algorithms, treewidth Author Index Volume 11 (2004) 386 VOLUME 10 NUMBER 1 SPRING 2003 SELECTED PAPERS OF THE PRAGUE STRINGOLOGY CONFERENCE (PSC'02), SEPTEMBER 23--24, 2002: Jan Holub Guest Editor's Foreword 1 Kensuke Baba, Ayumi Shinohara, Masayuki Takeda, Shunsuke Inenaga and Setsuo Arikawa A Note on Randomized Algorithm for String Matching with Mismatches 2 Abstract Atallah et al. introduced a randomized algorithm for string matching with mismatches, which utilized the fast Fourier transform (FFT) to compute convolutions. It estimates the score vector of matches between a text string and a pattern string, that is, the vector obtained when the pattern is slid along the text and the number of matches is counted for each position. In this paper, we simplify the algorithm and give an exact analysis of the variance of the estimator. ACM CCS Categories and Subject Descriptors: F.2 [Analysis of Algorithms and Problem Complexity] Key words: pattern matching, mismatch, FFT, convolution, randomized algorithm Luigi Cinque, Sergio De Agostino and Franco Liberati A Work-Optimal Parallel Implementation of Lossless Image Compression by Block Matching 13 Abstract Storer suggested that fast encoders are possible for two-dimensional lossless compression, by showing a square greedy-matching LZ1 heuristic for bi-level images which can be implemented by a simple hashing scheme [Storer 1996].
Rectangle matching improves the compression performance, but it is slower since it requires O(M log M) time for a single match, where M is the size of the match [Storer and Helfgott 1997]. Therefore, the sequential time to compress an image of size n by rectangle matching is Omega(n log M). In this paper, we show a work-optimal parallel algorithm using a rectangle greedy matching technique requiring O(log M log n) time on the PRAM EREW model. ACM CCS Categories and Subject Descriptors: I.4.2 [Image Processing and Computer Vision]: Compression (Coding); E.4 [Coding and Information Theory] Key words: lossless image compression, sliding dictionary, parallel algorithm, PRAM EREW Frantisek Franek, W. F. Smyth and Xiangdong Xiao A Note on Crochemore's Repetitions Algorithm - A Fast Space-Efficient Approach 21 Abstract The space requirement of Crochemore's repetitions algorithm is generally estimated to be about 20mn bytes of memory, where n is the length of the input string and m the number of bytes required to store the integer n. The same algorithm can also be used in other contexts, for instance to compute the suffix tree of the input string in O(n log n) time for the purpose of data compression. In such contexts the large space requirement of the algorithm is a significant drawback. There are of course several newer space-efficient algorithms with the same time complexity that can compute suffix trees or arrays. However, in actual implementations, these algorithms may not be faster than Crochemore's. Therefore, we consider it interesting enough to describe a new approach based on the same mathematical principles and observations that were put forth in Crochemore's original paper, but whose space requirement is 10mn bytes. Additional advantages of the approach are the ease with which it can be implemented in C/C++ and the apparent speed of such an implementation in comparison to other implementations of the original algorithm. 
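The repetitions the abstract above refers to can be made concrete with a tiny sketch (a naive quadratic Python illustration of what a repetition is, unrelated to Crochemore's O(n log n) partitioning technique; the function name is hypothetical): a repetition, or square, is a block that is immediately repeated, i.e. s[i:i+p] == s[i+p:i+2p].

```python
# Naive detection of repetitions (squares) in a string; purely
# illustrative -- Crochemore's algorithm achieves O(n log n).
def squares(s):
    n = len(s)
    found = []
    for p in range(1, n // 2 + 1):        # candidate period length
        for i in range(n - 2 * p + 1):    # candidate start position
            if s[i:i + p] == s[i + p:i + 2 * p]:
                found.append((i, p))      # s[i:i+2p] is a square
    return found
```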
ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems -- pattern matching Key words: string, repetition, repetition algorithm, suffix tree, implementation, space complexity Heikki Hyyrö A Bit-Vector Algorithm for Computing Levenshtein and Damerau Edit Distances 29 Abstract The edit distance between strings A and B is defined as the minimum number of edit operations needed in converting A into B or vice versa. The Levenshtein edit distance allows three types of operations: an insertion, a deletion or a substitution of a character. The Damerau edit distance allows the previous three and, in addition, a transposition between two adjacent characters. To the best of our knowledge, the best current practical algorithms for computing these edit distances run in time O(dm) and O(\lceil m/w \rceil (n + \sigma)), where d is the edit distance between the two strings, m and n are their lengths (m \leq n), w is the computer word size and \sigma is the size of the alphabet. In this paper we present an algorithm that runs in time O(\lceil d/w \rceil m + \lceil n/w \rceil \sigma) or O(\lceil d/w \rceil n + \lceil m/w \rceil \sigma). The structure of the algorithm is such that, in practice, it is mostly suitable for testing whether the edit distance between two strings is within some pre-determined error threshold. We also present some initial test results with thresholded edit distance computation. In these tests our algorithm runs faster than the original algorithm of Myers. ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems -- pattern matching, computations on discrete structures Key words: Levenshtein edit distance, Damerau edit distance, bit-parallelism, approximate string matching Costas S. Iliopoulos, Manal Mohamed, Laurent Mouchard, Katerina G. Perdikuri, W. F. Smyth and Athanasios K. 
Tsakalidis String Regularities with Don't Cares 40 Abstract We describe algorithms for computing typical regularities in strings x = x[1..n] that contain don't care symbols. For such strings on alphabet \Sigma, an O(n log n log |\Sigma|) worst-case time algorithm for computing the period is known, but the algorithm is impractical due to a large constant of proportionality. We present instead two simple practical algorithms that compute all the periods of every prefix of x; our algorithms require quadratic worst-case time but only linear time in the average case. We then show how our algorithms can be used to compute other string regularities, specifically the covers of both ordinary and circular strings. ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems; G.2.1 [Discrete Mathematics]: Combinatorics -- combinatorial algorithms Key words: string algorithm, regularities, don't care, period, border, cover Shunsuke Inenaga Bidirectional Construction of Suffix Trees 52 Abstract String matching is critical in information retrieval since in many cases information is stored and manipulated as strings. By constructing and utilizing a suitable data structure for a text string, we can solve the string matching problem efficiently. Such a structure is called an index structure. Suffix trees are certainly the most widely known and extensively studied structure of this kind. In this paper, we present a linear-time algorithm for bidirectional construction of suffix trees. The algorithm proposed is also applicable to bidirectional construction of directed acyclic word graphs (DAWGs). 
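The index-structure idea in the suffix-tree abstract above can be illustrated with a minimal sketch. As an assumption for illustration only, it uses a sorted-suffix index (a naive suffix array, which answers the same occurrence queries as a suffix tree, though constructed far less efficiently); the text and helper names are invented:

```python
import bisect

def build_suffix_index(text):
    # A sorted list of (suffix, start position) pairs: a naive suffix array,
    # standing in for the suffix trees discussed above.
    return sorted((text[i:], i) for i in range(len(text)))

def find_occurrences(index, pattern):
    # All suffixes starting with `pattern` form a contiguous run in sorted
    # order, so two binary searches locate every occurrence of the pattern.
    suffixes = [s for s, _ in index]
    lo = bisect.bisect_left(suffixes, pattern)
    hi = bisect.bisect_right(suffixes, pattern + "\uffff")
    return sorted(pos for _, pos in index[lo:hi])

idx = build_suffix_index("banana")
```

For example, `find_occurrences(idx, "ana")` returns `[1, 3]`, the two starting positions of "ana" in "banana".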
ACM CCS Categories and Subject Descriptors: E.1 [Data Structures]; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems Key words: strings, pattern matching, data structures, suffix trees, DAWGs, linear-time algorithm VOLUME 10 NUMBER 2 SUMMER 2003 SELECTED PAPERS OF THE NINTH INTERNATIONAL WORKSHOP ON EXPRESSIVENESS IN CONCURRENCY (EXPRESS'02), AUGUST 19, 2002: Uwe Nestmann and Prakash Panangaden Guest Editors' Foreword 69 Marco Carbone and Sergio Maffeis On the Expressive Power of Polyadic Synchronisation in pi-calculus 70 Abstract We extend the pi-calculus with polyadic synchronisation, a generalisation of the communication mechanism which allows channel names to be composite. We show that this operator embeds nicely in the theory of pi-calculus, we suggest that it permits divergence-free encodings of distributed calculi, and we show that a limited form of polyadic synchronisation can be encoded weakly in pi-calculus. After showing that matching cannot be derived in pi-calculus, we compare the expressivity of polyadic synchronisation, mixed choice and matching. In particular we show that the degree of synchronisation of a language increases its expressive power by means of a separation result in the style of Palamidessi's result for mixed choice. ACM CCS Categories and Subject Descriptors: F.1.2 [Computation by Abstract Devices]: Modes of Computation -- parallelism and concurrency Key words: pi-calculus, expressivity, matching, polyadic synchronisation, distributed systems Joël Ouaknine and James Worrell Timed CSP = Closed Timed epsilon-automata 99 Abstract We propose some mild modifications to the syntax and semantics of Timed CSP which significantly increase expressiveness. As a result, we are able to capture some of the most widely used specifications on timed systems as refinements (reverse inclusion of sets of behaviours) which may then be verified using the model checker FDR. 
We characterize the expressive power of the finite-state fragment of this augmented version of Timed CSP as that of closed timed epsilon-automata -- timed automata with epsilon-transitions (silent transitions) and closed invariant and enabling clock constraints. ACM CCS Categories and Subject Descriptors: F.1.1 [Computation by Abstract Devices]: Models of Computation Key words: Timed CSP, timed automata, denotational and operational semantics, verification, digitization Maribel Fernández and Lionel Khalil Interaction Nets with McCarthy's amb: Properties and Applications 134 Abstract Interaction nets are graphical rewrite systems which have been successfully used to implement various efficient evaluation strategies in the lambda-calculus (including optimal reduction). However, they are intrinsically deterministic, and this prevents these techniques from being applied to concurrent languages where non-determinism plays a key role. In this paper we show that a minimal extension --- the addition of one agent in the spirit of McCarthy's amb operator --- allows us to define non-deterministic processes such as angelic and infinity merge, and more generally, to encode process calculi and wide classes of term rewriting systems (including systems defining parallel functions). We show that Alexiev's INMPP (interaction nets with multiple principal ports) can be encoded, for which we give a textual calculus and a type system that ensures the absence of deadlock. 
ACM CCS Categories and Subject Descriptors: D.3.2 [Programming Languages]: Language Classifications -- concurrent, distributed and parallel languages; F.1.2 [Computation by Abstract Devices]: Modes of Computation -- alternation and non-determinism; F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic -- lambda calculus and related systems Key words: interaction nets, non-determinism, type systems, rewrite systems Vladimiro Sassone and Pawel Sobocinski Deriving Bisimulation Congruences using 2-categories 163 Abstract We introduce G-relative-pushouts (GRPO) which are a 2-categorical generalisation of relative-pushouts (RPO). They are suitable for deriving labelled transition systems (LTS) for process calculi where terms are viewed modulo structural congruence. We develop their basic properties and show that bisimulation on the LTS derived via GRPOs is a congruence, provided that sufficiently many GRPOs exist. The theory is applied to a simple subset of CCS and the resulting LTS is compared to one derived using a procedure proposed by Sewell. ACM CCS Categories and Subject Descriptors: F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; D.1.3 [Programming Techniques]: Concurrent Programming Key words: bisimulation, congruence, observational and contextual equivalences, reduction and transition systems, relative pushout, 2-category VOLUME 10 NUMBER 3 FALL 2003 Kjell Lemström and Jorma Tarhio Transposition Invariant Pattern Matching for Multi-Track Strings 185 Abstract We consider the problem of multi-track string matching. The task is to find the occurrences of a pattern across parallel strings. 
Given an alphabet \Sigma of natural numbers and a set S over \Sigma of h strings s^i=s^i_1 \cdots s^i_n for i=1, \ldots, h, a pattern p=p_1 \cdots p_m has such an occurrence at position j of S if p_1=s^{i_1}_j, p_2=s^{i_2}_{j+1}, \ldots , p_m=s^{i_m}_{j+m-1} holds for i_1, \ldots, i_m \in \{1, \ldots, h\}. An application of the problem is music retrieval where occurrences of a monophonic query pattern are searched in a polyphonic music database. In music retrieval it is even more pertinent to allow invariance for pitch-level transpositions, i.e., the task is to find whether there are occurrences of p in S such that the formulation above becomes p_1=s^{i_1}_j+c, p_2=s^{i_2}_{j+1}+c, \ldots, p_m=s^{i_m}_{j+m-1}+c for some constant c. We present several algorithms solving the problem. Our main contribution, the MonoPoly algorithm, is a transposition-invariant bit-parallel filtering algorithm for static databases. After an O(nhe) time preprocessing, it finds candidates for transposition-invariant occurrences in time O(n\lceil m/w \rceil + m + d), where w denotes the size of the machine word in bits and e and d denote factors dependent on the size of the alphabet. A straightforward algorithm is used to check whether the candidates are proper occurrences. The algorithm needs time O(hm) per candidate. 
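The occurrence definition above can be made concrete with a naive brute-force sketch (not the bit-parallel MonoPoly filter itself; the track contents, pitch values and function names are invented for illustration):

```python
def transposed_occurrence(tracks, pattern, j, c):
    # Occurrence at position j under transposition c: every pattern symbol,
    # shifted by c, must appear on SOME track (tracks may differ per symbol,
    # matching the independent indices i_1, ..., i_m in the definition).
    return all(any(track[j + k] == pattern[k] + c for track in tracks)
               for k in range(len(pattern)))

def find_transposed(tracks, pattern):
    # Naive search over all positions and candidate transpositions, taken
    # directly from the problem statement above.
    n, m = len(tracks[0]), len(pattern)
    hits = []
    for j in range(n - m + 1):
        # Only transpositions aligning p_1 with some track symbol can match.
        for c in sorted({track[j] - pattern[0] for track in tracks}):
            if transposed_occurrence(tracks, pattern, j, c):
                hits.append((j, c))
    return hits

tracks = [[60, 62, 64, 65], [57, 59, 60, 62]]   # two parallel pitch tracks
hits = find_transposed(tracks, [62, 64])
```

Here the monophonic query [62, 64] is found both untransposed (c = 0 at position 1) and at several transposed positions, e.g. shifted by c = -5 at position 0 on the second track.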
ACM CCS Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems -- pattern matching; J.5 [Computer Applications]: Arts and Humanities -- music Key words: string algorithms, combinatorial pattern matching, bit parallelism, music retrieval Jirí Fiala, Pinar Heggernes, Petter Kristiansen and Jan Arne Telle Generalized H-coloring and H-covering of Trees 206 Abstract We study H(p,q)-colorings of graphs, for H a fixed simple graph and p,q natural numbers, a generalization of various other vertex partitioning concepts such as H-covering. An H-cover of a graph G is a local isomorphism between G and H, and the complexity of deciding if an input graph G has an H-cover is still open for many graphs H. In this paper we show that the complexity of H(2p,q)-COLORING is directly related to these open graph covering problems, and answer some of them by resolving the complexity of H(p,q)-COLORING for all acyclic graphs H and all values of p and q. ACM CCS Categories and Subject Descriptors: G.2.2 [Discrete Mathematics]: Graph Theory -- graph algorithms, trees; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems Key words: H-coloring, H-covering, polynomial-time and NP-complete Michael Segal Placing an Obnoxious Facility in Geometric Networks 224 Abstract In this paper we consider several different problems of placing an obnoxious facility on geometric networks. In particular, our main results show how to obtain efficient polynomial-time algorithms for locating an obnoxious facility on the given network under various objective functions, such as maximizing the total sum of distances or maximizing the smallest of the distances from the facility to the nodes of the network. 
Our algorithms are obtained by applying concepts and techniques from Computational Geometry such as range searching, constructing spanners and other optimization schemes. ACM CCS Categories and Subject Descriptors: G.4 [Mathematical Software] -- algorithm design and analysis Key words: geometric networks, facilities, optimization techniques, spanners, range searching Jyrki Katajainen and Fabio Vitale Navigation Piles with Applications to Sorting, Priority Queues, and Priority Deques 238 Abstract A data structure, named a navigation pile, is described and exploited in the implementation of a sorting algorithm, a priority queue, and a priority deque. When carrying out these tasks, a linear number of bits is used in addition to the elements manipulated, and extra space for a sublinear number of elements is allocated if the grow and shrink operations are to be supported. Our aim is to use little extra space, perform few element moves, and still remain efficient in the number of element comparisons and machine instructions. In spite of low memory consumption, the worst-case bounds for the number of element comparisons, element moves, and machine instructions are close to the absolute minimum. 
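The priority-deque interface that a navigation pile supports can be sketched with a toy stand-in. This is emphatically not the navigation pile itself: a sorted list gives O(n) insertion and none of the space or move guarantees of the structure above; the class and method names are invented:

```python
import bisect

class PriorityDeque:
    # A toy double-ended priority queue: insert elements, then extract
    # either the current minimum or the current maximum.
    def __init__(self):
        self._items = []

    def insert(self, x):
        bisect.insort(self._items, x)   # keep the list sorted

    def pop_min(self):
        return self._items.pop(0)       # smallest element

    def pop_max(self):
        return self._items.pop()        # largest element

pd = PriorityDeque()
for x in (5, 1, 4, 2):
    pd.insert(x)
```

After these insertions, `pop_min()` yields 1 and `pop_max()` yields 5, from the same structure.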
ACM CCS Categories and Subject Descriptors: F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems -- sorting and searching; E.1 [Data Structures] -- lists, stacks, and queues; E.2 [Data Storage Representations] -- linked representations; D.3.2 [Programming Languages]: Language Classifications -- object-oriented languages Key words: algorithms, sorting, data structures, priority queues, priority deques, heaps VOLUME 10 NUMBER 4 WINTER 2003 SELECTED PAPERS OF THE FOURTEENTH NORDIC WORKSHOP ON PROGRAMMING THEORY (NWPT'02), NOVEMBER 20--22, 2002: Magne Haveraaen and Jüri Vain Guest Editors' Foreword 263 Marcin Benke, Peter Dybjer and Patrik Jansson Universes for Generic Programs and Proofs in Dependent Type Theory 265 Abstract We show how to write generic programs and proofs in Martin-Löf type theory. To this end we consider several extensions of Martin-Löf's logical framework for dependent types. Each extension has a universe of codes (signatures) for inductively defined sets with generic formation, introduction, elimination, and equality rules. These extensions are modeled on Dybjer and Setzer's finitely axiomatized theories of inductive-recursive definitions, which also have universes of codes for sets, and generic formation, introduction, elimination, and equality rules. Here we consider several smaller universes of interest for generic programming and universal algebra. We formalize one-sorted and many-sorted term algebras, as well as iterated, generalized, parameterized, and indexed inductive definitions. We also show how to extend the techniques of generic programming to these universes. Furthermore, we give generic proofs of reflexivity and substitutivity of a generic equality test. Most of the definitions in the paper have been implemented using the proof assistant Alfa for dependent type theory. 
ACM CCS Categories and Subject Descriptors: F.3.3 [Logics and Meanings of Programs]: Studies of Program Constructs -- functional constructs, type structure; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs -- logics of programs; F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic -- lambda calculus and related systems, mechanical theorem proving; D.3.2 [Programming Languages]: Language Classifications -- applicative (functional) languages Key words: generic programming, polytypic programming, Martin-Löf type theory, dependent types, inductive definitions, algebraic specifications Neil Ghani and Christoph Lüth Rewriting Via Coinserters 290 Abstract This paper introduces a semantics for rewriting that is independent of the data being rewritten and which, nevertheless, models key concepts such as substitution which are central to rewriting algorithms. We demonstrate the naturalness of this construction by showing how it mirrors the usual treatment of algebraic theories as coequalizers of monads. We also demonstrate its naturalness by showing how it captures several canonical forms of rewriting. ACM CCS Categories and Subject Descriptors: F.4.m [Mathematical Logic and Formal Languages]: Miscellaneous -- rewrite systems, term rewriting systems; F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic -- lambda calculus and related systems Key words: rewriting, monads, computation, category theory Einar Broch Johnsen and Christoph Lüth Abstracting Refinements for Transformation 313 Abstract Formal program development by stepwise refinement involves a lot of work discharging proof obligations. Transformational techniques can reduce this work: applying correct transformation rules removes the need for verifying the correctness of each refinement step individually. However, a crucial problem is how to identify appropriate transformation rules. 
In this paper, a method is proposed to incrementally construct a set of correctness preserving transformation rules for refinement relations in arbitrary specification formalisms. Transformational developments are considered as proofs, which are generalised. This results in a framework where specific example refinements can be systematically generalised to more applicable transformation rules. The method is implemented in the Isabelle theorem prover and demonstrated on an example of data refinement. ACM CCS Categories and Subject Descriptors: F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic; I.2.2 [Artificial Intelligence]: Automatic Programming Key words: program refinement, transformation, theorem provers, proof reuse Henrik Pilegaard, Michael R. Hansen and Robin Sharp An Approach to Analyzing Availability Properties of Security Protocols 337 Abstract Meadows has shown how availability issues involving security protocols may be treated by analyzing the time-consuming internal actions of agents with respect to fail-stop criteria that are cost dependent. In this paper we present a technique which combines Paulson's inductive approach to protocol analysis with Interval Logic in order to create a verification tool that supports analysis of the kind Meadows proposes. Based on a novel notion of packets and a redefined notion of external events we develop theoretical extensions that enable the inductive analysis method to distinguish active attacks. To supplement the global traces of external events we define an inductive theory of (untimed) local traces of internal actions. Notions of timed global traces and timed local traces are introduced and used to develop a method that allows natural modelling of real-time (cost) properties of security protocols. 
Proof support for the theory developed here is achieved via encoding in Isabelle/LSILHOL, an Isabelle/HOL environment for (labelled) interval logics. Some small examples of protocols are treated and properties are shown via interactive theorem proving. ACM CCS Categories and Subject Descriptors: D.4.6 [Operating Systems]: Security and Protection; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning about Programs; F.4.1 [Mathematical Logic and Formal Languages]: Mathematical Logic Key words: security protocols, availability, modelling, real-time, interval logic, neighbourhood logic, verification, Isabelle/HOL Author Index Volume 10 (2003) 374 VOLUME 9 NUMBER 1 SPRING 2002 Mayer Goldberg and Mads Torgersen How to Circumvent Church Numerals 1 Abstract In this work we consider a standard numeral system in the lambda-calculus, and the elementary arithmetic and Boolean functions and predicates defined on this numeral system, and show how to construct terms that ``circumvent'' or ``defeat'' these functions: The equality predicate is satisfied when comparing these special terms to any numeral, the zero predicate is satisfied for these terms, etc. We believe this exercise offers an instructive look at what definability means in the untyped lambda-calculus. CR Classification: F.4.1 Key words: lambda-calculus, lambda-definability, lambda-solvability Giri Narasimhan and Michiel Smid Approximation Algorithms for the Bottleneck Stretch Factor Problem 13 Abstract The stretch factor of a Euclidean graph is the maximum ratio of the distance in the graph between any two points and their Euclidean distance. The following problem is considered. Preprocess a set S of n points in R^d into a data structure that supports the following queries: Given an arbitrary query value b > 0, compute a constant-factor approximation of the stretch factor of the graph G_b, which is the graph on S containing all edges of length at most b. 
We give a data structure for this problem having size O(log n) and query time O(log log n). Even though there could be up to {n \choose 2} different stretch factors in the collection { G_b : b > 0 } of graphs, we show that this data structure can be constructed in subquadratic time. Our algorithms use techniques from computational geometry, such as minimum spanning trees, well-separated pairs, data structures for the nearest-neighbor problem, and algorithms for selecting and ranking distances. CR Classification: F.2.2 Key words: computational geometry, approximation algorithms, geometric spanners, well-separated pairs Mikael Hammar, Bengt J. Nilsson and Sven Schuierer Improved Exploration of Rectilinear Polygons 32 Abstract Exploring a polygon is the problem of a robot that does not have a map of its surroundings seeing the complete polygon; in other words, the robot must construct a map of the polygon. Exploration can be viewed as an online problem. Typical of online problems is that the solution method must make decisions based on past events but without knowledge about the future. In our case the robot does not have complete information about the environment. Competitive analysis can be used to measure the performance of methods solving online problems. The competitive factor of such a method is the ratio between the method's performance and the performance of the best method having full knowledge about the future. We present a 5/3-competitive strategy for exploring a simple rectilinear polygon in the L_1 metric. This improves the previous factor-two bound of Deng, Kameda and Papadimitriou. CR Classification: F.2.2 Key words: computational geometry, online algorithms, polygon exploration Maxime Crochemore, Costas Iliopoulos, Christos Makris, Wojciech Rytter, Athanasios Tsakalidis and Kostas Tsichlas Approximate String Matching with Gaps 54 Abstract In this paper we consider several new versions of approximate string matching with gaps. 
The main characteristic of these new versions is the existence of gaps in the matching of a given pattern in a text. Algorithms are devised for each version and their time and space complexities are stated. These specific versions of approximate string matching have various applications in computerized music analysis. CR Classification: F.2.2 Key words: string algorithms, approximate string matching, dynamic programming, computerized music analysis Kasey N. Klipsch and David S. Wise Blood from Dahm's Turnip 66 Abstract Dahm's compression of the table representing a COBOL data declaration, by Knuth, is compressed still further. CR Classification: D.3.4, E.1 Key words: design, algorithms, multilinked structures, COBOL VOLUME 9 NUMBER 2 SUMMER 2002 SELECTED PAPERS OF THE EIGHTH INTERNATIONAL WORKSHOP ON EXPRESSIVENESS IN CONCURRENCY (EXPRESS'01), AUGUST 20, 2001: Luca Aceto and Prakash Panangaden Guest Editors' Foreword 69 Roberto M. Amadio and Charles Meyssonnier On Decidability of the Control Reachability Problem in the Asynchronous Pi-Calculus 70 Abstract We study the decidability of the control reachability problem for various fragments of the asynchronous Pi-calculus. We consider the combination of three main features: name generation, name mobility, and unbounded control. We show that the combination of name generation with either name mobility or unbounded control leads to an undecidable fragment. On the other hand, we prove that name generation with unique receiver and bounded input (a condition weaker than bounded control) is decidable by reduction to the coverability problem for Petri nets with transfer (and back). CR Classification: F.3 Key words: Pi-calculus, control reachability, Petri nets Julian C. Bradfield and Sibylle B. 
Fröschle Independence-Friendly Modal Logic and True Concurrency 102 Abstract We consider modal analogues of Hintikka et al.'s independence-friendly first-order logic, and discuss their relationship to equivalences previously studied in concurrency theory. CR Classification: F.4.1, F.3.1 Key words: independence, concurrency, branching quantifiers, modal logic Anne Labroue and Philippe Schnoebelen An Automata-Theoretic Approach to the Reachability Analysis of RPPS Systems 118 Abstract We show how the reachability analysis of RPPS systems can be tackled with the tree-automata techniques proposed by Lugiez and Schnoebelen for PA. This approach requires that we express the states of RPPS systems in RPA, a tailor-made process rewrite system where reachability is a relation recognizable by finite tree-automata. Two outcomes of this study are (1) an NP algorithm for reachability in RPPS systems, and (2) a simple decision procedure for a large class of reachability problems in RPA systems. CR Classification: F.3.1 Key words: verification of infinite-state systems, process algebra, reachability analysis, tree automata, model checking Mogens Nielsen, Catuscia Palamidessi and Frank D. Valencia Temporal Concurrent Constraint Programming: Denotation, Logic and Applications 145 Abstract The tcc model is a formalism for reactive concurrent constraint programming. We present a model of temporal concurrent constraint programming which adds to tcc the capability of modeling asynchronous and nondeterministic timed behavior. We call this tcc extension the ntcc calculus. We also give a denotational semantics for the strongest-postcondition of ntcc processes and, based on this semantics, we develop a proof system for linear-temporal properties of these processes. 
The expressiveness of ntcc is illustrated by modeling cells, timed systems such as RCX controllers, multi-agent systems such as the Predator/Prey game, and musical applications such as generation of rhythmic patterns and controlled improvisation. CR Classification: F.3.1, F.3.2, D.1.3, D.3.3 Key words: temporal concurrent constraint programming, calculi for concurrency, program specification, timed systems, reactive systems, multi-agent systems, musical applications VOLUME 9 NUMBER 3 FALL 2002 SELECTED PAPERS OF THE TENTH NORDIC WORKSHOP ON PROGRAMMING ENVIRONMENT RESEARCH (NWPER'2002), AUGUST 18--20, 2002: Kasper Østerbye Guest Editor's Foreword 189 Erik Ernst Safe Dynamic Multiple Inheritance 191 Abstract Multiple inheritance and similar mechanisms are usually only supported at compile time in statically typed languages. Nevertheless, dynamic multiple inheritance would be very useful in the development of complex systems, because it allows the creation of many related classes without an explosion in the size and level of redundancy in the source code. In fact, dynamic multiple inheritance is already available. The language gbeta is statically typed and has supported run-time combination of classes and methods since 1997, by means of the combination operator PCO. However, with certain combinations of operands the PCO operator fails; as a result, dynamic creation of new classes and methods was considered a dangerous operation in all cases. This paper presents a large and useful category of combinations, and proves that combinations in this category will always succeed. CR Classification: D.3 Key words: multiple inheritance, dynamic mechanisms, static analysis Thomas Vestdam Elucidative Program Tutorials 209 Abstract In this paper we present a tool for creating elucidative program tutorials. 
An elucidative program tutorial explains a program through textual explanations that address relevant places in a program by using hyperlinks or by in-lining fragments of the program in the tutorial text. We present a means for specifying the fragments of a program that are to be in-lined in the tutorial text. These in-line fragments are defined by addressing named syntactical elements, such as classes and methods, but it is also possible to address individual code lines. The fragment specifications help maintain consistency between the program tutorial and the program. The tool also detects when a program tutorial is addressing program fragments that do not exist. The program tutorials are presented in a web-browser where navigation is provided to other documents such as reference manuals, other tutorials, and to presentations of source code fragments in their original context. We have produced three example program tutorials in order to get some experience with the tool, and we see potential in using the tool to produce program tutorials for frameworks, libraries, and for use in educational contexts. CR Classification: D.2.7, I.7.2 Key words: documentation, Elucidative Programming, code extraction, Java, Literate Programming Erik Arisholm, Dag I. K. Sjøberg, Gunnar J. Carelius and Yngve Lindsjørn A Web-Based Support Environment for Software Engineering Experiments 231 Abstract The software engineering community frequently proposes new software engineering technologies, such as new development techniques, programming languages and tools, without rigorous scientific evaluation. One way to evaluate software engineering technologies is through controlled experiments where the effects of the technology can be isolated from confounding factors, i.e., establishing cause-effect relationships. 
For practical and financial reasons, however, such experiments are often quite unrealistic, typically involving students in a class-room environment solving small pen-and-paper tasks. A common criticism of the results of the experiments is their lack of external validity, i.e., that the results are not valid outside the experimental conditions. To increase the external validity of the experimental results, the experiments need to be more realistic. The realism can be increased using professional developers as subjects who conduct larger experimental tasks in their normal work environment. However, the logistics involved in running such experiments are tremendous. More specifically, the experimental materials (e.g., questionnaires, task descriptions, code and tools) must be distributed to each programmer, the progress of the experiment needs to be controlled and monitored, and the results of the experiment need to be collected and analyzed. To support these logistics for large-scale, controlled experiments, we have developed a web-based experiment support environment called SESE. This paper describes SESE, its development and the experiences from using it to conduct a large controlled experiment in industry. CR Classification: D.2 Key words: software engineering, controlled experiment Imed Hammouda and Kai Koskimies A Pattern-Based J2EE Application Development Environment 248 Abstract J2EE is Java's platform for building distributed enterprise applications. The platform takes advantage of a wide range of new and evolving technologies and has been enriched by proven design solutions. These solutions are formulated and documented in what is known as J2EE design patterns. Rather than being applied in isolation, the patterns can be composed into a complete design system. This pattern-based system can significantly ease the development process of J2EE applications and improve the quality of the produced software. 
In this paper, we will show how a general architectural tool (Fred) can be used to model such a pattern system and to generate an architecture-centric environment for developing J2EE applications. The environment is a task-based wizard that guides the user through the development of the application by enforcing certain design rules. CR Classification: D.2.10, D.2.11, D.2.13 Key words: J2EE, framework, design patterns, software architecture Lars Bendix and Görel Hedin Summary of the Subworkshop on Extreme Programming 261 VOLUME 9 NUMBER 4 WINTER 2002 SELECTED PAPERS OF THE THIRTEENTH NORDIC WORKSHOP ON PROGRAMMING THEORY (NWPT'01), OCTOBER 10--12, 2001: Magne Haveraaen and Michael R. Hansen Guest Editors' Foreword 267 Tobias Amnell, Elena Fersman, Paul Pettersson, Hongyan Sun and Wang Yi Code Synthesis for Timed Automata 269 Abstract We present a framework for the development of real-time embedded systems based on timed automata extended with a notion of real-time tasks. It has been shown previously that reachability and schedulability for such automata can be checked effectively using model checking techniques. In this paper, we propose to use the extended automata as design models. We describe how to compile design models to executable programs with predictable behaviours. The compiling procedure ensures that the execution of the generated code satisfies mixed timing, resource and logical constraints imposed on the design model. To demonstrate the applicability of the framework, a prototype C-code generator based on the legOS operating system has been implemented in the Times tool and applied to develop the control software for a production cell. The production cell has been built in LEGO equipped with a Hitachi H8 based LEGO Mindstorms control brick. 
CR Classification: C.3, D.1.2, D.2.2, D.2.4, D.2.6, I.2.8, I.6.3, I.6.4 Key words: real-time, timed automata, code synthesis, schedulability, case study Jochen Hoenicke and Ernst-Rüdiger Olderog CSP-OZ-DC: A Combination of Specification Techniques for Processes, Data and Time 301 Abstract CSP-OZ-DC is a new combination of three well-researched formal techniques for the specification of processes, data and time: CSP [Hoare 1985], Object-Z [Smith 2000], and Duration Calculus [Zhou et al. 1991]. This combination is illustrated by specifying the train controller of a case study on radio controlled railway crossings. The technical contribution of the paper is a smooth integration of the underlying semantic models and its use for verifying timing properties of CSP-OZ-DC specifications. This is done by combining the model-checkers FDR [Roscoe 1994] for CSP and UPPAAL [Bengtsson et al. 1997] for timed automata with a new tool f2u that transforms FDR transition systems and certain patterns of Duration Calculus formulae into timed automata. This approach is illustrated by the example of a vending machine. CR Classification: D.2.1, D.2.2, D.2.4, F.3.1, F.4.1 Key words: CSP, Object-Z, Duration Calculus, transformational semantics, real-time processes, model-checking, FDR, UPPAAL Flemming Nielson, Helmut Seidl and Hanne Riis Nielson A Succinct Solver for ALFP 335 Abstract We develop a solver algorithm which efficiently computes the stable model of a very expressive fragment of predicate logic. The succinct formulation of the algorithm is due to the disciplined use of continuations and memoisation. This facilitates a precise characterisation of the behaviour of the solver and the development of a complexity calculation from which its formal complexity is obtained. Practical experiments on a control-flow analysis of the ambient calculus show that the solver frequently performs better than the worst-case complexity estimates.
CR Classification: D.3, F.3, F.4 Key words: program analysis, alternation-free Least Fixpoint Logic, top-down fixed point computation Eric G. Wagner Algebraic Specifications: some old history and new thoughts 373 Abstract Thirty years ago, in September 1971, the "ADJ Group" (Joe Goguen, Jim Thatcher, Eric Wagner, and Jesse Wright) was founded at IBM Research. In this paper I will sketch the history of the ADJ group, and give an overview of their technical contributions in the area of data type specification (making considerable use of hindsight). The aim here is to provide a unified treatment of the material that displays the mathematical simplicity, the power, and some of the shortcomings, of the approach. This is followed by some thoughts on why ADJ's ideas have not had more effect on software development. Finally there is a section sketching my investigations of program specification that were motivated by these thoughts. CR Classification: D.3.1, D.3.3, F.3.1, F.3.2, K.2 Key words: algebraic specification, initial algebra semantics, algebraic development techniques Heike Wehrheim Relating State-based and Behaviour-oriented Subtyping 405 Abstract Subtyping relations for object-oriented formalisms describe relationships between classes which satisfy the substitutability requirement imposed on types and their subtypes. Subtyping relations have been proposed both for state-based and for behaviour-oriented specification formalisms. In this paper we state and prove correspondence results between these two forms of subtyping: starting from a state-based specification language we show that via a behaviour-oriented semantics state-based subtyping induces behaviour-oriented subtyping. In the comparison we study three different kinds of subtyping achieving substitutivity to different degrees. For all three kinds of definitions a correspondence result can be shown.
CR Classification: D.2.1, D.3.1, F.1.1, F.3.1, F.3.2 Key words: specification formalisms, object-orientation, refinement, behavioural subtyping Author Index Volume 9 (2002) 436 VOLUME 8 NUMBER 1 SPRING 2001 SELECTED PAPERS OF THE ELEVENTH NORDIC WORKSHOP ON PROGRAMMING THEORY (NWPT'99), OCTOBER 13--15, 1999: Kaisa Sere and Wang Yi Guest Editors' Foreword 1 Ralph-Johan Back, Luigia Petre and Ivan Porres Continuous Action Systems as a Model for Hybrid Systems 2 Abstract Action systems have been used successfully to describe discrete systems, i.e., systems with discrete control acting upon a discrete state space. In this paper we define continuous action systems that extend the action system approach to also model hybrid systems, i.e., systems with discrete control over a continuously evolving state. The semantics of continuous action systems is defined in terms of traditional action systems and their properties are proved using standard action systems proof techniques. We describe the essential features of continuous action systems, show that they can be used to describe a diverse range of hybrid systems and illustrate the framework by a collection of examples. CR Classification: F.3.1, F.4.1, D.2.4 Key words: refinement calculus, action systems, hybrid systems Ana Bove Simple General Recursion in Type Theory 22 Abstract General recursive algorithms are such that the recursive calls are performed on arguments satisfying no condition that guarantees termination. Hence, there is no direct way of formalising them in type theory. The standard way of handling general recursion in type theory uses a well-founded recursion principle. Unfortunately, this way of formalising general recursive algorithms often produces unnecessarily long and complicated code. On the other hand, functional programming languages like Haskell impose no restrictions on recursive programs, and thus writing general recursive algorithms is straightforward.
In addition, functional programs are usually short and self-explanatory. However, the existing frameworks for reasoning about the correctness of Haskell-like programs are weaker than the framework provided by type theory. The goal of this work is to present a method that combines the advantages of both programming styles when writing simple general recursive algorithms. The method introduced here separates the computational and logical parts of the definition of an algorithm, which has several advantages. First, the resulting type-theoretic algorithms are compact and easy to understand; they are as simple as their Haskell versions. Second, totality is now a separate task and hence, this method can also be used in the formalisation of partial functions. Third, the method presented here also simplifies the task of formal verification. Finally, it can easily be extended to treat nested and mutual recursion. The main feature of the method is the introduction of an inductive predicate, specially defined for the algorithm to be formalised. This predicate can be thought of as characterising the set of inputs for which the algorithm terminates. It contains an introduction rule for each of the cases that need to be considered and provides an easy syntactic condition that guarantees the termination of the algorithm. CR Classification: F.3.1 Key words: general recursion, type theory, functional programming Thomas Hune, Kim G. Larsen and Paul Pettersson Guided Synthesis of Control Programs Using UPPAAL 43 Abstract In this paper we address the problem of scheduling and synthesizing distributed control programs for a batch production plant. We use a timed automata model of the batch plant and the verification tool UPPAAL to solve the scheduling problem. In modeling the plant, we aim at a level of abstraction which is sufficiently accurate in order that synthesis of control programs from generated timed traces is possible. 
Consequently, the models quickly become too detailed and complicated for immediate automatic synthesis. In fact, only models of plants producing two batches can be analyzed directly! To overcome this problem, we present a general method allowing the user to guide the model-checker according to heuristically chosen strategies. The guidance is specified by augmenting the model with additional guidance variables and by decorating transitions with extra guards on these. Applying this method has made synthesis of control programs feasible for a plant producing as many as 60 batches. The synthesized control programs have been executed in a physical plant. Besides proving useful in validating the plant model and in finding some modeling errors, we view this final step as the ultimate litmus test of our methodology's ability to generate executable (and executing) code from basic plant models. CR Classification: D.1.2, D.2.2, D.2.4, F.3.1, I.6.4, J.6 Key words: real-time verification, guided model-checking, scheduling, program synthesis, distributed systems Bengt Jonsson, Tiziana Margaria, Gustaf Naeser, Jan Nyström and Bernhard Steffen Incremental Requirement Specification for Evolving Systems 65 Abstract We propose a technique for hierarchically structuring requirement specifications in a way that simplifies change management. Our technique decomposes the process of requirement specification and validation. Specifications, added or modified incrementally on various levels of abstraction, can directly be subject to automatic verification, thus providing early feedback concerning design decisions or the impact of adding or changing service functionality. Thus it is possible to maintain consistency between (incomplete) requirement specifications and the underlying evolving implementation model. CR Classification: D.2.1, I.6.4, I.6.5 Key words: telecommunication services, feature interaction, requirement engineering, change management, specification refinement R. F.
Lutje Spelberg and W. J. Toetenel Parametric Real-Time Model Checking Using Splitting Trees 88 Abstract This article discusses a new approach to model checking of real-time systems. A novel aspect of our approach is its unconventional representation of symbolic state spaces. Its key feature is that it does not use a canonical representation of symbolic nodes, but an alternative representation based on splitting trees. We describe this approach in terms of the verification problem of parametric reachability of systems described in an extension of timed automata. Additionally, we describe how we extended this approach to deal with more complex verification problems, namely the parametric verification of an extension of the real-time temporal logic TCTL. This resulted in a model checking tool called PMC. The practical application of our approach is addressed through the analysis and verification of the root contention protocol of the IEEE 1394 (FireWire) standard using this tool. CR Classification: D.2.4, C.3, D.4.7 Key words: real-time systems, specification, verification, parametric model checking Simon Mørk Distributed Implementation of a Process-Algebra Based Programming Language for Embedded Systems 121 Abstract This paper deals with software development for a particular class of embedded systems. A timed programming language is presented that retains most of the abstract concepts of a process algebra, yet restricts the problems of constructing actual implementations on embedded system platforms. The implementation is given in terms of a translation from the PMC programming language to a model of implementation platforms, called Imp.
CR Classification: D.2, D.3 Key words: embedded systems, process algebra, synchronous communication, concurrency, distributed system, high-level programming languages Mauno Rönkkö and Xuandong Li Linear Hybrid Action Systems 159 Abstract Action Systems is a predicate-transformer-based formalism. It supports the development of provably correct reactive and distributed systems by refinement. Recently, Action Systems were extended with a differential action. It is used for modelling continuous behaviour, thus, allowing the use of refinement in the development of provably correct hybrid systems, i.e., a discrete controller interacting with some continuously evolving environment. However, refinement as a method is concerned with correctness issues only. It offers very little guidance on what details one should consider during the refinement steps to make the system more robust. That information is revealed by robustness analysis. Other formalisms not supporting refinement do have tool support for automating the robustness analysis, e.g., HyTech for linear hybrid automata. Consequently, we study in this paper the non-trivial translation problem between Action Systems and linear hybrid automata. As the main contribution, we give and prove correct an algorithm that translates a linear hybrid action system to a linear hybrid automaton. With this algorithm we combine the strengths of the two formalisms: we may use HyTech for the robustness analysis to guide the development by refinement. CR Classification: D.2.4, F.1.1, F.3.1, F.3.2, J.7 Key words: action systems, linear hybrid systems, transition systems, linear hybrid automata VOLUME 8 NUMBER 2 SUMMER 2001 Thomas Ottmann, Sven Schuierer and Subbiah Soundaralakshmi Enumerating Extreme Points in Higher Dimensions 179 Abstract We consider the problem of enumerating all extreme points of a given set P of n points in d dimensions.
We present a simple and practical algorithm which uses O(n) space and O(nm) time, where m is the number of extreme points of P. Our algorithm is designed to work even for highly degenerate input. We also present an algorithm to compute the depth of each point of the given set of n points in d dimensions. This algorithm has time complexity O(n^2), which significantly improves on the O(n^3) complexity of the naive algorithm. CR Classification: F.2.2 Key words: extreme points, linear programming, computational geometry Kaisa Sere and Marina Waldén Structuring and Verifying Distributed Algorithms 193 Abstract We present a structuring and verification method for distributed algorithms. The basic idea is that an algorithm to be verified is stepwise transformed into a high level specification through a number of steps, so-called coarsenings. At each step some mechanism of the algorithm is identified, verified and removed while the basic computation of the original algorithm is preserved. The method is based on a program development technique called superposition and it is formalized within the refinement calculus. We will show the usefulness of the method by verifying a complex distributed algorithm for minimum-hop route maintenance due to Chu. CR Classification: D.2.4, F.3.1 Key words: verifying programs, understanding programs, distributed systems, superposition refinement, action systems Joachim Gudmundsson, Christos Levcopoulos and Giri Narasimhan Approximating a Minimum Manhattan Network 219 Abstract Given a set S of n points in the plane, we define a Manhattan Network on S as a rectilinear network G with the property that for every pair of points in S, the network G contains the shortest rectilinear path between them. A Minimum Manhattan Network on S is a Manhattan network of minimum possible length.
A Manhattan network can be thought of as a graph G=(V,E), where the vertex set V corresponds to points from S and a set of Steiner points S', and the edges in E correspond to horizontal or vertical line segments connecting points in S U S'. A Manhattan network can also be thought of as a 1-spanner (for the L_1 -metric) for the points in S. Let R be an algorithm that produces a rectangulation of a staircase polygon in time R(n) of weight at most r times the optimal. We design an O(n\log n + R(n)) time algorithm which, given a set S of n points in the plane, produces a Manhattan network on S with total weight at most 4r times that of a minimum Manhattan network. Using known rectangulation algorithms, this gives us an O(n^3)-time algorithm with approximation factor four, and an O(n \log n)-time algorithm with approximation factor eight. CR Classification: F.2.2, I.3.5 Key words: computational geometry, approximation algorithms, spanners Hanne Riis Nielson and Flemming Nielson Shape Analysis for Mobile Ambients 233 Abstract The ambient calculus is a calculus of computation that allows active processes to move between sites. We present an analysis inspired by state-of-the-art pointer analyses that safely and accurately predicts which processes may turn up at what sites during the execution of a composite system. The analysis models sets of processes by sets of regular tree grammars enhanced with context dependent counts, and it obtains its precision by combining a powerful redex materialisation with a strong redex reduction (in the manner of the strong updates performed in pointer analyses). 
CR Classification: D.3, F.3, F.4 Key words: mobile ambients, static analysis, shape analysis VOLUME 8 NUMBER 3 FALL 2001 SELECTED PAPERS OF THE TWELFTH NORDIC WORKSHOP ON PROGRAMMING THEORY (NWPT'00), OCTOBER 11--11, 2000: Magne Haveraaen and Olaf Owe Guest Editors' Foreword 277 Walter Dosch and Sönke Magnussen Computer Aided Fusion for Algebraic Program Derivation 279 Abstract We present an extension of the Lübeck Transformation System LTS automating the fusion of a function with a catamorphism as a refinement step in algebraic program development. The system detects catamorphisms in constructor-based higher-order algebraic specifications and generates an axiomatisation of the composition function. As the basis we give a generalized treatment of the fusion theorem in the setting of algebraic specifications. We illustrate the approach by presenting compound fusion transformations for non-free data structures. CR Classification: F.3.1, F.3.3, I.2.2 Key words: computer aided programming, program transformation, higher-order algebraic specification, catamorphism, fusion Yngve Lamo and Michal Walicki Specification of Parameterized Programs -- Persistency Revisited 298 Abstract We revisit the concept of persistent functor pointing out its well known limitations for the purpose of describing the semantics of specifications of parameterized data types (PDTs). We introduce a more general notion of a semantic functor F which requires only that the parameter algebra A is a subalgebra of its image F(A). We illustrate the flexibility and advantages of the proposed construction by examples. The main part of the paper concerns the syntactic restrictions on the specifications which allow one to define the semantics of PDTs in this way. One obtains the possibilities to: 1) preserve the carriers of the parameter algebras (corresponding to the classical persistency), or 2) extend the carriers of the parameter algebras.
The latter situation means that one can, typically, use free functor semantics. In this case, one also has two further options: either 2a) to restrict the validity of the axioms from the parameter specification to apply only to the old elements (from the carriers of the parameter algebras), or 2b) to extend them to apply also to the new elements added to the carrier. We discuss the mechanism of actual parameter passing, actualization of the semantic functors and refer to an earlier work for the counterparts of the vertical and horizontal composition theorems, as well as the notion of refinement of PDTs which amounts to the refinement of the structure of the specified data type. CR Classification: D.2.1, D.2.m, D.3.1, F.3.1 Key words: specification, abstract data types, reusable programs, parameterization, multialgebras Viktor Petersson and Sergei Vorobyov A Randomized Subexponential Algorithm for Parity Games 324 Abstract We describe a randomized algorithm for Parity Games (equivalent to the Mu-Calculus Model Checking), which runs in expected time 2^(O(k^(1/(1+2epsilon)))) when k is Omega(n^(1/2+epsilon)), n is the number of vertices, and 0 < epsilon <= 1/2. That is, our algorithm is subexponential in the number of colors k of the game graph provided that k is not too small. All previously known algorithms were exponential in the number of colors, with the best one taking time and space O(k^2 n sqrt(n)^k). Our algorithm does not rely on Linear Programming subroutines and uses a low-degree polynomial space. CR Classification: F.2 Key words: parity games, mu-calculus model checking, decision complexity, subexponential decision algorithm Harald Fecher A Real-Time Process Algebra with Open Intervals and Maximal Progress 346 Abstract Many real-time process algebras have the maximal progress assumption. In those process algebras, the time intervals in which actions are enabled are left-closed.
This paper presents a process algebra that satisfies the maximal progress assumption and allows left-open intervals. A non-observable time step is introduced to model the time when an urgent action enabled in interval (0,1) is taken. Furthermore, we have to distinguish between observable actions and actions which only get enabled after a non-observable time step. This is necessary, since the latter actions may only produce internal actions. The distinction is done by extending the set of actions by marked actions. The real-time process algebra presented here is an extension of Milner's CCS. This algebra can be used to model dynamic priority of actions at the same point in time. We introduce various equivalence relations based on bisimulation. CR Classification: F.1.2, F.3.2 Key words: real-time, process algebra, maximal progress, open intervals, bisimulation, priority Tarmo Uustalu, Varmo Vene and Alberto Pardo Recursion Schemes from Comonads 366 Abstract Within the setting of the categorical approach to total functional programming, we introduce a ``many-in-one'' recursion scheme that neatly unifies a variety of seemingly diverging strengthenings of the basic recursion scheme of iteration. The new scheme is doubly generic: in addition to being parametric in a functor capturing the signature of an inductive type, it is also parametric in a comonad and a distributive law (of the functor over the comonad) that together encode the recursive call pattern of a particular recursion scheme for this inductive type. Specializations of the scheme for particular comonads and distributive laws include (simple) iteration and mild generalizations of primitive recursion and course-of-value iteration. 
CR Classification: D.1.1, D.3.3, F.3.3 Key words: inductive types, iteration, recursion schemes, initial functor-algebras, comonads, distributive laws, functional programming, genericity, category-theoretic Jose Emilio Labra Gayo, Juan Manuel Cueva Lovelle, Maria Cándida Luengo Díez and Agustín Cernuda del Río Modular Development of Interpreters from Semantic Building Blocks 391 Abstract We present an integration of modular monadic semantics and generic programming concepts that facilitates the modular development of interpreters. Using modular monadic semantics, the computational structure is obtained as the composition of several monad transformers where each monad transformer adds a new notion of computation. Using generic programming, it is possible to divide the recursive structure of the abstract syntax into several non-recursive pattern functors. For each functor F, it is possible to define an F-algebra over the computational structure. In this way, an interpreter is automatically obtained as a catamorphism. In this paper we show that it is also possible to use monadic catamorphisms and to combine them with catamorphisms, allowing the separation of recursive evaluation from semantic specification. As an example, we define the modular specification of a simple functional programming language with imperative features. CR Classification: D.3.4, F.3.2 Key words: modular monadic semantics, programming languages, interpreters VOLUME 8 NUMBER 4 WINTER 2001 Béla Bollobás, Gautam Das, Dimitrios Gunopulos and Heikki Mannila Time-Series Similarity Problems and Well-Separated Geometric Sets 409 Abstract Given a pair of nonidentical complex objects, defining (and determining) how similar they are to each other is a nontrivial problem. In data mining applications, one frequently needs to determine the similarity between two time series. We analyze a model of time-series similarity that allows outliers, different scaling functions, and variable sampling rates.
We present several deterministic and randomized algorithms for computing this notion of similarity. The algorithms are based on nontrivial tools and methods from computational geometry. In particular, we use properties of families of well-separated geometric sets. The randomized algorithm has provably good performance and also works extremely efficiently in practice. CR Classification: H.2.8, I.3.5 Key words: data mining, time-series, similarity measures, geometric sets Patric R. J. Östergård A New Algorithm for the Maximum-Weight Clique Problem 424 Abstract Given a graph, in the maximum clique problem one wants to find the largest number of vertices, any two of which are adjacent. In the maximum-weight clique problem, the vertices have positive, integer weights, and one wants to find a clique with maximum weight. A recent algorithm for the maximum clique problem is here used as a basis for developing an algorithm for the weighted case. Computational experiments with random graphs show that this new algorithm is faster than earlier algorithms in many cases. A set of weighted graphs obtained from the problem of constructing good constant weight error-correcting codes is proposed as test cases for maximum-weight clique algorithms; in fact, one of these leads to a new record-breaking code. CR Classification: G.2.2 Key words: branch-and-bound algorithm, clique, graph algorithm Bernd Grobauer and Julia L. Lawall Partial Evaluation of Pattern Matching in Strings, revisited 437 Abstract Specialization of a string matcher is a canonical example of partial evaluation. A naive implementation of a string matcher repeatedly matches a pattern against every substring of the data string; this operation should intuitively benefit from specializing the matcher with respect to the pattern. In practice, however, producing an efficient implementation by performing this specialization using standard partial-evaluation techniques requires non-trivial binding-time improvements.
Starting with a naive matcher, we thus present a derivation of such a binding-time improved string matcher. We show that specialization with respect to a pattern yields a matcher with code size linear in the length of the pattern and a running time independent of the length of the pattern and linear in the length of the data string. We then consider several variants of matchers that specialize well, amongst them the first such matcher presented in the literature, and we demonstrate how variants can be derived from each other systematically. CR Classification: D.1.1, D.3.2, D.2.3, D.3.4, F.3.1, F.3.2 Key words: partial evaluation, Knuth-Morris-Pratt string matching, binding-time improvements, functional programming, program transformation, complexity, optimization, reasoning about programs Joan Boyar, Lene M. Favrholdt, Kim S. Larsen and Morten N. Nielsen The Competitive Ratio for On-Line Dual Bin Packing with Restricted Input Sequences 463 Abstract We consider the On-Line Dual Bin Packing problem where we have a fixed number n of bins of equal size and a sequence of items. The goal is to maximize the number of items that are packed in the bins by an on-line algorithm. An investigation of First-Fit and an algorithm called Log shows that, in the special case where all sequences can be completely packed by an optimal off-line algorithm, First-Fit has a constant competitive ratio, but Log does not. In contrast, if there is no restriction on the input sequences, Log is exponentially better than First-Fit. This is the first separation of this sort with a difference of more than a constant factor. We also design randomized and deterministic algorithms for which the competitive ratio is constant on sequences which the optimal off-line algorithm can pack using at most Alpha n bins, if Alpha is constant and known to the algorithm in advance. 
CR Classification: F.2.2 Key words: on-line algorithms, competitive analysis, bin packing, restricted adversaries, randomization, admission control, accommodating function Note: Jochen Burghardt Maintaining Partial Sums in Logarithmic Time 473 Abstract We present a data structure that allows one to maintain, in logarithmic time, all partial sums of the elements of a linear array during incremental changes of the elements' values. CR Classification: E.1, G.3 Key words: partial sums, data structures, algorithms Author Index Volume 8 (2001) 475 VOLUME 7 NUMBER 1 SPRING 2000 J. Karhumäki, W. Plandowski and W. Rytter Pattern-Matching Problems for Two-Dimensional Images Described by Finite Automata 1 Abstract The power of weighted finite automata to describe very complex images has been widely studied, see [5, 6, 8]. Finite automata can be used as an effective tool for compression of two-dimensional images. There are some software packages using this type of compression, see [6, 13]. We consider the complexity of some pattern-matching problems for two-dimensional images which are highly compressed using finite deterministic and weighted automata as small descriptions of images. Our basic problems are compressed pattern-matching, where the pattern is given explicitly and the text is compressed, and fully compressed pattern-matching (when also the pattern is compressed). We also consider fully compressed pattern-checking: testing of a given occurrence of the compressed pattern in a given position. We prove: 1. Compressed matching for deterministic automata is in P. 2. Compressed matching for weighted automata is NP-complete. 3. Fully compressed pattern-checking for deterministic automata is in P. 4. Fully compressed matching for deterministic automata is NP-complete. CR Classification: F.2.2, E.4 Key words: pattern-matching, compression, automata, two-dimensional texts H. L. Bodlaender and K.
Jansen On the Complexity of the Maximum Cut Problem 14 Abstract The complexity of the simple max cut problem is investigated for several special classes of graphs. It is shown that this problem is NP-complete when restricted to one of the following classes: chordal graphs, undirected path graphs, split graphs, tripartite graphs, and graphs that are the complement of a bipartite graph. The problem can be solved in polynomial time, when restricted to graphs with bounded treewidth, or cographs. We also give large classes of graphs that can be seen as generalizations of classes of graphs with bounded treewidth and of the class of cographs, and allow polynomial time algorithms for the simple max cut problem. CR Classification: F.2, G.2 Key words: max cut, graph algorithms, chordal graphs, treewidth, cographs J. Gudmundsson and C. Levcopoulos A Parallel Approximation Algorithm for Minimum Weight Triangulation 32 Abstract In this paper we show a parallel algorithm that produces a triangulation which is within a constant factor longer than the Minimum Weight Triangulation (MWT) in time O(log n) using O(n) processors and linear space in the CRCW PRAM model. We present a relaxed version of the quasi-greedy triangulation algorithm which produces edges which are at most a factor (1+epsilon) longer than the shortest diagonal, where epsilon is some positive constant smaller than 1. The analysis shows that the relaxed version still outputs a triangulation which is within a constant factor longer than the minimum weight triangulation. We also show that the approximation behavior of the greedy algorithm may deteriorate dramatically, i.e., become Omega(n) times longer than a minimum weight triangulation, if the lengths of the edges are not computed with high precision. CR Classification: F.2.2, I.3.5 Key words: computational geometry, approximation algorithms, triangulation M.
Zito Linear Time Maximum Induced Matching Algorithm for Trees 58 Abstract A matching in a graph G is a collection of non-intersecting edges. The matching is induced if no two edges in the matching are joined by an edge in G. This paper studies the complexity of finding a largest induced matching when the input graph is a tree. We describe the first linear time algorithm for this problem. CR Classification: G.2.2, F.2.2 Key words: graphs, matching, induced, trees, algorithm VOLUME 7, NUMBER 2, SUMMER 2000 SELECTED PAPERS OF THE EIGHTH NORDIC WORKSHOP ON PROGRAMMING ENVIRONMENT RESEARCH (NWPER'2000), MAY 28-30, 2000: Andreas L. Opdahl and Dag I. K. Sjøberg Guest Editors' Foreword: Programming and Software Development Environment Research in the Nordic Countries 65 Eva Magnusson and Görel Hedin Program Visualization Using Reference Attributed Grammars 67 Abstract This paper describes how attribute grammars can be used to integrate program visualization in language-based environments and how program visualizations can be specified and generated from grammars. It is discussed how a general solution for a simple grammar can be reused in grammars for other specific languages. As an example we show how diagram generation for a very simple state transition language can be integrated in a more complex specific state transition language. We use an extended form of attribute grammars, RAGs, which permits attributes to be references to nodes in the syntax tree. An external graph drawing tool is used to visualize the diagrams. The solution is modularized to support reuse for different languages and exchange of the external drawing tool for different types of visualization. CR Classification: C.3, D.1.5, D.2.6 Key words: program visualization, attribute grammars, object-oriented techniques, language-based environments Kurt Nørmark Elucidative Programming 87 Abstract In this paper we introduce Elucidative Programming as a variant of Literate Programming. 
Literate Programming represents the idea of organizing a source program in an essay that documents the program understanding. An elucidative program connects textual documentation with the abstractions and the details in the source program. The documentation and the source program are defined in separate files. Using Elucidative Programming, the relations between the documentation and the units of the program are defined without use of containment. The textual documentation is intended to be technical writing about the program. Elucidative Programming allows for documentation of transverse relations among the program constituents. In addition, Elucidative Programming is oriented towards on-line presentation of documentation and programs in a WWW browser. CR Classification: D.1.0, D.2.6, I.7.2 Key words: program documentation, literate programming, programming tools, WWW program presentation Thomas Vestdam Documentation Threads - Presentation of Fragmented Documentation 106 Abstract In this paper, we introduce a means for presenting fragmented documentation through a concept called documentation threads. A documentation thread presents a number of documentation fragments specified by the programmer and can guide a documentation reader through the fragmented documentation of modern object-oriented software. A tool, DocSewer, that supports the construction of documentation threads has been built and used to experiment with documentation threads. We have found that the documentation thread concept is promising because it can be used to combine independent documentation and documentation written in CASE tools into a meaningful presentation. Furthermore, we have found a potential use of documentation threads in interface documentation. Even though documentation threads have a conceptual problem, our experiments give us reason to believe that this problem can be eased.
CR Classification: D.2.7 Key words: program documentation, documentation threads, documentation patterns Maarit Harsu Identifying Object-Oriented Features from Procedural Software 126 Abstract When re-engineering non-object-oriented (procedural) programs into an object-oriented platform, the first step is to identify object-oriented features from procedural code. We are especially interested in how to find other object-oriented features besides objects. Examples of such features are inheritance and polymorphism. We introduce new methods to identify these features. The translation process into an object-oriented language exploits both re-engineering and language conversion methods. We discuss different approaches in language conversion, dividing language converters into source-driven and target-driven translators. We consider how these approaches are related to the identification of object-oriented features. CR Classification: D.2.7 Key words: re-engineering, language conversion, object identification Lars Bendix, Ulf Asklund and Jonas Persson Summary of the Subworkshop on Change Management for Open Source Software 143 VOLUME 7, NUMBER 3, FALL 2000 SELECTED PAPERS OF THE SEVENTH SCANDINAVIAN WORKSHOP ON ALGORITHM THEORY (SWAT'2000), JULY 5-7, 2000: Magnús M. Halldórsson Guest Editor's Foreword 149 Rasmus Pagh A Trade-Off for Worst-Case Efficient Dictionaries 151 Abstract We consider dynamic dictionaries over the universe U = {0,1}^w on a unit-cost RAM with word size w and a standard instruction set, and present a linear space deterministic dictionary accommodating membership queries in time (log log n)^(O(1)) and updates in time (log n)^(O(1)), where n is the size of the set stored. Previous solutions either had query time (log n)^(Omega(1)) or update time 2^(omega(sqrt(log n))) in the worst case.
CR Classification: F.2.2, H.3.3 Key words: deterministic dynamic dictionary, membership, worst-case, trade-off Anne Berry, Jean-Paul Bordat and Pinar Heggernes Recognizing Weakly Triangulated Graphs by Edge Separability 164 Abstract We apply Lekkerkerker and Boland's recognition algorithm for triangulated graphs to the class of weakly triangulated graphs. This yields a new characterization of weakly triangulated graphs, as well as a new O(m^2) recognition algorithm which, unlike the previous ones, is not based on the notion of a 2-pair, but rather on the structural properties of the minimal separators of the graph. It also gives the strongest relationship to the class of triangulated graphs that has been established so far. CR Classification: G.2.2, F.2.2 Key words: weakly triangulated graphs, graph recognition, graph characterization, minimal separators, triangulated graphs Piotr Berman A d/2 Approximation for Maximum Weight Independent Set in d-Claw Free Graphs 178 Abstract A graph is d-claw free if no node has d distinct independent neighbors. In the most usual applications, the nodes of this graph form a family of sets with fewer than d elements, and the edges indicate overlapping pairs of sets. Consequently, an independent set in the graph is a packing in our family of sets. In this paper we consider the following problem. Given is a d-claw free graph G = (V,E,w) where w: V -> R+. We describe an algorithm with running time polynomial in |V|^d that finds an independent set A such that w(A^*)/w(A) <= d/2, where A^* is an independent set that maximizes w(A^*). The previous best polynomial time approximation algorithm obtained w(A^*)/w(A) <= 2d/3. CR Classification: F.2, G.2.2 Key words: approximation algorithms, maximum independent set, node weights Srinivas Doddi, Madhav V. Marathe, S. S. Ravi, David S.
Taylor and Peter Widmayer Approximation Algorithms for Clustering to Minimize the Sum of Diameters 185 Abstract We consider the problem of partitioning the n nodes of a complete edge weighted graph into k clusters so as to minimize the sum of the diameters of the clusters. Since the problem is NP-complete, our focus is on the development of good approximation algorithms. When edge weights satisfy the triangle inequality, we present the first approximation algorithm for the problem. The approximation algorithm yields a solution which has no more than O(k) clusters such that the sum of cluster diameters is within a factor O(ln(n/k)) of the optimal value using exactly k clusters. Our approach also permits a tradeoff among the constant terms hidden by the two big-O terms and the running time. For any fixed k, we present an approximation algorithm that produces k clusters whose total diameter is at most twice the optimal value. When the distances are not required to satisfy the triangle inequality, we show that, unless P = NP, for any rho >= 1, there is no polynomial time approximation algorithm that can provide a performance guarantee of rho even when the number of clusters is fixed at 3. We also present some results for the problem of minimizing the sum of cluster radii. CR Classification: G.2.2 Key words: clustering, data mining, approximation algorithms, bicriteria problems, combinatorial algorithms Vincenzo Liberatore Scheduling Jobs before Shut-Down 204 Abstract Distributed systems execute background or alternative jobs while waiting for data or requests to arrive from another processor. In those cases, the following shut-down scheduling problem arises: given a set of jobs of known processing time, schedule them on m machines so as to maximize the total weight of jobs completed before an initially unknown deadline. We present optimally competitive deterministic and randomized algorithms for shut-down scheduling. 
Our deterministic algorithm is parameterized by the number m of machines. Its competitive ratio increases as the number of machines decreases, but it is optimal for any given choice of m. Such a family of deterministic algorithms can be translated into a family of randomized algorithms that use progressively less randomization and that are optimal for any given number of fair coin tosses. Hence, we establish a precise trade-off between the amount of randomization and the best possible competitive ratio. CR Classification: F.2.2 Key words: algorithms, online algorithms, competitive analysis, scheduling, knapsack Pankaj K. Agarwal, Leonidas J. Guibas, Sariel Har-Peled, Alexander Rabinovitch and Micha Sharir Penetration Depth of Two Convex Polytopes in 3D 227 Abstract Let A and B be two convex polytopes in R^3 with m and n facets, respectively. The penetration depth of A and B, denoted as pi(A,B), is the minimum distance by which A has to be translated so that A and B do not intersect. We present a randomized algorithm that computes pi(A,B) in O(m^(3/4+eps) n^(3/4+eps) + m^(1+eps) + n^(1+eps)) expected time, for any constant eps > 0. It also computes a vector t such that ||t|| = pi(A,B) and int(A + t) \cap B = \emptyset. We show that if the Minkowski sum B \oplus (-A) has K facets, then the expected running time of our algorithm is O(K^(1/2+eps) m^(1/4) n^(1/4) + m^(1+eps) + n^(1+eps)), for any eps > 0. We also present an approximation algorithm for computing pi(A,B). For any delta > 0, we can compute, in time O(m + n + (log^2 (m+n))/delta), a vector t such that ||t|| <= (1+delta) pi(A,B) and int(A + t) \cap B = \emptyset. Our result also gives a delta-approximation algorithm for computing the width of A in time O(n + (1/delta) log^2 (1/delta)), which is simpler and faster than the recent algorithm by Chan [3].
CR Classification: F.2.2, G.3, I.3.5, I.2.9 Key words: collision detection, Minkowski sum, geometric optimization Tetsuo Asano, Tomomi Matsui and Takeshi Tokuyama Optimal Roundings of Sequences and Matrices 241 Abstract In this paper, we discuss the problem of computing an optimal rounding of a real sequence (resp. matrix) into an integral sequence (resp. matrix). Our criterion of optimality is to minimize the weighted l_{\infty}-distance Dist_{\infty}^{F,w}(A,B) between an input sequence (resp. matrix) A and the output B. The distance is dependent on a family F of intervals (resp. rectangular regions) for the sequence rounding (resp. matrix rounding) and a positive-valued weight function w on the family. We give efficient polynomial-time algorithms for the sequence-rounding problem for the weighted l_{\infty}-distance with respect to any weight function w and any family F of intervals. For the matrix-rounding problem, we prove that it is NP-hard to compute an approximate solution with approximation ratio smaller than 2 with respect to the unweighted l_{\infty}-distance associated with the family W_2 of all 2 x 2 square regions. CR Classification: F.2, I.4 Key words: algorithms, rounding, halftoning, sequence VOLUME 7, NUMBER 4, WINTER 2000 SELECTED PAPERS OF THE EUROPEAN SYMPOSIUM ON PROGRAMMING (ESOP'2000), MARCH 25 -- APRIL 2, 2000: Gert Smolka Guest Editor's Foreword 257 Martin Hofmann A Type System for Bounded Space and Functional In-Place Update 258 Abstract We show how linear typing can be used to obtain functional programs which modify heap-allocated data structures in place. We present this both as a ``design pattern'' for writing C-code in a functional style and as a compilation process from linearly typed first-order functional programs into malloc()-free C code. The main technical result is the correctness of this compilation.
The crucial innovation over previous linear typing schemes consists of the introduction of a resource type Diamond which controls the number of constructor symbols such as cons in recursive definitions and ensures linear space while restricting expressive power surprisingly little. While the space efficiency brought about by the new typing scheme and the compilation into C can also be realised with state-of-the-art optimising compilers for functional languages such as Ocaml [16], the present method provides guaranteed bounds on heap space which will be of use for applications such as languages for embedded systems or automatic certification of resource bounds. We show that the functions expressible in the system are precisely those computable on a linearly space-bounded Turing machine with an unbounded stack. By a result of Cook this equals the complexity class 'exponential time'. A tail recursive fragment of the language captures the complexity class 'linear space'. We discuss various extensions, in particular an extension with FIFO queues admitting constant time catenation and enqueuing, and an extension of the type system to fully-fledged intuitionistic linear logic. CR Classification: D.1.1, D.3.3, F.3.2, I.2.2 Key words: C, compilers, computational complexity, functional programming, garbage collection, memory organization Laurent Mauborgne An Incremental Unique Representation for Regular Trees 290 Abstract In order to deal with infinite regular trees (or other pointed graph structures) efficiently, we give new algorithms to store such structures. The trees are stored in such a way that their representation is unique and shares substructures as much as possible. This maximal sharing allows substantial memory gains and speedups over previous techniques. For example, equality testing becomes constant time (instead of O(n log n)). The algorithms are incremental, and as such allow good reactive behavior.
These new algorithms are then applied in a representation of sets of trees. The expressive power of this new representation is exactly what is needed by the original set-based analyses of Heintze and Jaffar [1990], or Heintze [1994]. CR Classification: D.1, E.1, G.2.2, F.2 Key words: infinite trees, sharing, tree skeletons, cartesian approximation François Pottier A Versatile Constraint-Based Type Inference System 312 Abstract The combination of subtyping, conditional constraints and rows yields a powerful constraint-based type inference system. We illustrate this claim by proposing solutions to three delicate type inference problems: ``accurate'' pattern matchings, record concatenation, and first-class messages. Previously known solutions involved a different technique in each case; our theoretical contribution is in using only a single set of tools. On the practical side, this allows all three problems to benefit from a common set of constraint simplification techniques, a formal description of which is given in an appendix. CR Classification: F.3.3 Key words: constraint-based type inference, subtyping, rows, conditional constraints Claudio V. Russo First-Class Structures for Standard ML 348 Abstract Standard ML is a statically typed programming language that is suited for the construction of both small and large programs. ``Programming in the small'' is captured by Standard ML's Core language. ``Programming in the large'' is captured by Standard ML's Modules language that provides constructs for organising related Core language definitions into self-contained modules with descriptive interfaces. While the Core is used to express details of algorithms and data structures, Modules is used to express the overall architecture of a software system. The Modules and Core languages are stratified in the sense that modules may not be manipulated as ordinary values of the Core.
This is a limitation, since it means that the architecture of a program cannot be reconfigured according to run-time demands. We propose a novel and practical extension of the language that allows modules to be manipulated as first-class values of the Core language. CR Classification: D.3 Key words: existential types, first-class modules, Standard ML, type theory Helmut Seidl and Bernhard Steffen Constraint-Based Inter-Procedural Analysis of Parallel Programs 375 Abstract We provide a uniform framework for the analysis of programs with procedures and explicit, unbounded, fork/join parallelism covering not only bitvector problems like reaching definitions or live variables but also non-bitvector problems like strong copy constant propagation. Due to their structural similarity to the sequential case, the resulting algorithms are as efficient as their widely accepted sequential counterparts, and they can easily be integrated in existing program analysis environments like e.g. MetaFrame or PAG. We are therefore convinced that our method will soon find its way into industrial-scale computer systems. CR Classification: F.3.2, D.1.3, D.2.5, F.1.1 Key words: inter-procedural program analysis, explicit parallelism, bitvector problems, strong copy constant propagation, coincidence theorems Author Index Volume 7 (2000) 401 VOLUME 6, NUMBER 1, SPRING 1999 SELECTED PAPERS OF THE EIGHTH NORDIC WORKSHOP ON PROGRAMMING ENVIRONMENT RESEARCH (NWPER'98), AUGUST 21-23, 1998: Khalid A. Mughal and Andreas L. Opdahl Guest Editors' Foreword: Programming and Software Development Environment Research in the Nordic Countries 1 Henrik Bærbak Christensen The Ragnarok Software Development Environment 4 Abstract Ragnarok is an experimental software development environment that focuses on enhanced support for managerial activities in large scale software development taking the daily work of the software developer as its point of departure. 
The main emphasis is support in three areas: management, navigation, and collaboration. The leitmotif is the software architecture, which is extended to handle managerial data in addition to source code; this extended software architecture is put under tight version- and configuration-management control and is furthermore used as a basis for visualisation. Preliminary results of using the Ragnarok prototype in a number of projects are outlined. CR Classification: D.2.2, D.2.9, D.2.1, H.5.2, K.6.3 Key words: software architecture, software development environment, software configuration management, software visualisation Kåre Kjelstrøm and Peter Petersen A CASE Tool for COM Development 22 Abstract This article presents a generic graphical tool-kit for developing Microsoft Component Object Model (COM) software components. A method for generating code for arbitrary languages is presented, along with a description of how the tool could be integrated with third-party applications. A short review of a concrete implementation concludes the paper. CR Classification: D.2.2, D.1.5 Key words: object orientation, OLE, COM, component model, Computer Aided Software Engineering, CASE Elizabeth Bjarnason, Görel Hedin, and Klas Nilsson Interactive Language Development for Embedded Systems 36 Abstract Domain-specific languages provide programming support that leads to simpler and safer programs, which is highly desirable for programming embedded systems and manufacturing equipment. However, language design is non-trivial, and the required development effort is often too costly for industrial projects. To cope with this problem, we have implemented a highly interactive tool, APPLAB, for developing and experimenting with domain-specific languages, and used it in the development of a programming environment for industrial robots.
APPLAB supports the development of domain-specific languages by providing a high-level specification formalism (reference attributed grammars), and by providing integrated language-based support for editing syntax, semantics and the application program. The tool was connected to an industrial robot and confronted with programming topics encountered in typical industrial applications. Experiences from our full-scale prototype, including the development of a robot programming language, show that interactive language development is very beneficial for programming of embedded systems. CR Classification: C.3, D.1.5, D.2.6, D.3.4 Key words: domain specific languages, interactive tools, attribute grammars, embedded systems, industrial robots Henrik Røn An Overview of a Dynamic Programming Environment Based on Extensibility 55 Abstract This paper presents an overview of Midgård, a program development environment, where part of the development of a program is carried out in the context of an execution of the program itself. Midgård enables the developer to experiment with different designs and solutions while maintaining a low turn-around time during most of the development. Midgård thus provides better support for exploratory programming than traditional batch-oriented environments for statically checked languages. The design of Midgård, which is based on dynamic extensibility, is described and a prototype implementation for the block structured object-oriented programming language BETA is presented. In the prototype the dynamic extensibility is implemented using an embedded compiler and an embedded incremental linker. The main contribution is the use of an embedded compiler and an embedded incremental linker to enable part of the development of a program to be done in the context of an execution of the program itself with low turn-around time.
CR Classification: D.2.2, D.2.6, D.3.4 Key words: exploratory programming, dynamic extensibility, dynamic and incremental linking, dynamic compilation, runtime programming, statically checked Erik Ernst Dynamic Inheritance in a Statically Typed Language 72 Abstract There is a long-standing schism in object-orientation between the safe but rigid statically typed languages, and the flexible but less safe dynamically typed languages. This paper presents an enhancement of the flexibility and expressive power of a statically typed language, based on an inheritance mechanism which allows both compile-time and run-time construction of classes, as well as dynamic specialization of objects. This enhanced flexibility supports better separation of concerns in large systems and allows for more elegant and maintainable designs. Submethoding---inheritance applied to behavioral descriptors---has been used for the construction of control structures for many years, in BETA. With dynamic inheritance, submethoding becomes more expressive, supporting control structures as first class values which may be constructed and combined dynamically. Even though the concept of submethoding is missing from most other languages, the basic idea could be applied to any statically typed object-oriented language. CR Classification: D.3.3 Key words: object orientation, dynamic inheritance, static analysis, first-class control structures, BETA, gbeta Maarit Harsu Translation of Conditional Compilation 93 Abstract This paper describes how to translate the compiler directives for conditional compilation in automated source-to-source translation between high-level programming languages. The directives for conditional compilation of the source language are translated into the corresponding directives of the target language, and the source program text of each branch of conditional compilation is translated into the corresponding target program text. 
Such translation raises a problem in conventional parsing because the source text is not a continuous stream of tokens of a legal program but rather a sequence of fragments that must be combined in certain ways to obtain one of several possible alternative programs. As a solution to this problem, a parsing technique is introduced which is able to cope with such input if certain reasonable conditions are satisfied. CR Classification: D.3.4 Key words: source-to-source translation, language conversion, conditional compilation, parsing Kurt Nørmark, Lars Iversen, and Per Madsen Animation and Presentation Tools for Object-Oriented Design 110 Abstract Modelling the dynamic aspects of an object-oriented design is important in order to gain concrete insight into the ways objects interact and relate to each other. It may be possible to obtain a similar insight from a static model, but at a more abstract level and in a more indirect way. Dynamic models are usually presented as graph-based diagrams in which the vertices are objects and the edges are messages or relations among objects. In this paper we study such diagrams as the basis for tools in a dynamic modelling environment. We are especially interested in different ways to present time in such tools. One particular approach is to present `time by time'. This leads to animation of dynamic models. By introducing the idea of `design by animation' we aim at a radical improvement of the presentation and manipulation of dynamic models in contemporary CASE tools. 
CR Classification: D.2.2, D.2.6, D.1.5 Key words: object-oriented design, dynamic modelling, object graphs, interaction diagrams, design by animation Lars Bendix and Ulf Asklund Summary of the Subworkshop on Change Management 129 VOLUME 6, NUMBER 3, FALL 1999 SELECTED PAPERS OF THE TENTH NORDIC WORKSHOP ON PROGRAMMING THEORY (NWPT'98), OCTOBER 14-16, 1998: Kaisa Sere Guest Editor's Foreword 213 Paolo Ciancarini, Andrea Omicini and Franco Zambonelli Coordination Technologies for Internet Agents 215 Abstract Since its birth, the Internet has been characterised by a twofold aspect: that of a distributed information repository, used to store, publish, and retrieve program and data files, and that of an interaction medium, offering a variety of communication services. An important current trend consists of merging these two aspects and envisioning the Internet as a globally distributed computing platform where communication and computation can be freely intertwined. However, traditional distributed programming models fall short in this context, due to the peculiar characteristics of the Internet. On the one hand, Internet services are decentralised and unreliable. On the other hand, and even more important, mobility, whether of users, devices, or application components, is going to impact the Internet in the near future. Since Internet applications are intrinsically interactive and collaborative, the definition of an appropriate coordination model and its integration in forthcoming Internet programming languages are key issues in building applications that include mobile entities. We sketch the main features that such a model should present. Then, we survey and discuss some coordination models for Internet programming languages, finally outlining open issues and promising research directions.
CR Classification: D.1.3, D.2.10, C.2.4, F.1.2, I.2.11 Key words: coordination models, Internet agents, multi-agent systems, mobile agents Magne Haveraaen, Helmer André Friis and Tor Arne Johansen Formal Software Engineering for Computational Modelling 241 Abstract Software itself may be considered a formal structure and may be subject to mathematical analysis. This leads to a discipline of formal software engineering (which is not necessarily the same as the use of formal methods in software engineering), where a formal understanding of what software components are and how they may interact is used to engineer both the components themselves and their organisation. One strategy is to organise the software using the same concepts that are suited to organising the problem domain itself. In this paper we apply this idea in the development of computational modelling software, in particular in the development of a family of related programs for simulation of elastic wave propagation in earth materials. We also discuss some data on the technique's effectiveness. CR Classification: C.0, D.1.5, D.2.1, D.2.11, D.2.13, G.4, I.0 Key words: algebraic software methodologies, coordinate free numerics, numerical software, seismic simulation, domain specific languages, software architecture, software process model, software life cycle model Kim G. Larsen, Justin Pearson, Carsten Weise and Wang Yi Clock Difference Diagrams 271 Abstract In this paper, we present Clock Difference Diagrams (CDD), a new BDD-like data-structure for effective representation and manipulation of certain non-convex subsets of the Euclidean space, notably those encountered in verification of timed automata. It is shown that all set-theoretic operations, including inclusion checking, may be carried out efficiently on Clock Difference Diagrams. Other clock operations needed for fully symbolic analysis of timed automata, e.g.
future- and reset-operations, can be obtained based on a so-called tight normal form for CDDs. A version of the real-time verification tool UPPAAL using Clock Difference Diagrams as the main data-structure has been implemented. Experimental results demonstrate significant space-savings: for nine industrial examples the savings are on average 42%, with a moderate increase in runtime. CR Classification: D.2.1, D.2.2, D.2.4, I.2.2, I.6.4, F.3.1 Key words: automatic verification, real-time systems, timed automata, symbolic model-checking, clock difference diagrams Carl Johan Lillieroth and Satnam Singh Formal Verification of FPGA Cores 299 Abstract A formal verification technique is presented which is suitable for the verification of standard components (cores) for programmable hardware (FPGAs). This technique can be viewed as a suitable complement to traditional techniques, such as simulation. We describe the modelling and verification of combinational and sequential components using the formal verification tool NP-Tools, and report successful verification of real cores. CR Classification: B.6.3, D.2.4 Key words: formal verification, cores, reconfigurable cores, core development methodology, intellectual property, FPGAs, equivalence checking, Stålmarck's Method Elena A. Troubitsyna Reliability Assessment through Probabilistic Refinement 320 Abstract The paper presents an approach to reasoning about probabilistic systems that are to be implemented in a parallel or distributed manner. The approach allows us to obtain a quantitative assessment of the reliability of a system under construction. We base our reasoning on the application of refinement, and in particular data refinement techniques, to the specification of systems containing a level of overall system reliability.
Performing refinement of such a probabilistic specification we not only ensure the correctness of the system design but also establish a quantitative link between the overall system reliability and reliabilities of components from which the system is to be implemented. We illustrate the approach by designing a system that models data transmission over an unreliable medium. CR Classification: D.2.4, G.3 Key words: probability, reliability, action system, refinement Tarmo Uustalu and Varmo Vene Mendler-Style Inductive Types, Categorically 343 Abstract We present a basis for a category-theoretic account of Mendler-style inductive types. The account is based on suitably defined concepts of Mendler-style algebra and algebra homomorphism; Mendler-style inductive types are identified with initial Mendler-style algebras. We use the identification to obtain a reduction of conventional inductive types to Mendler-style inductive types and a reduction in the presence of certain restricted existential types of Mendler-style inductive types to conventional inductive types. CR Classification: D.1.1, D.3.3, F.3.3, F.4.1 Key words: inductive types, initial algebras, prefixed points, Mendler-style, typed functional programming, categorical program calculation VOLUME 6 NUMBER 4 WINTER 1999 Min-Shiang Hwang A Dynamic Key Generation Scheme for Access Control in a Hierarchy 363 Abstract In this paper, we propose a new dynamic cryptographic key generation scheme for access control in a hierarchy. Our method can achieve the following three goals. First, the storage space needed to store public information is smaller than that required in previous work. Second, when a security class is added to the hierarchy, we assign a secret key and a public derivation key to the security class without affecting the keys of the other security classes in the hierarchy. 
Third, when a security class is deleted from the hierarchy, we simply erase the keys of that security class in the hierarchy and change the derivation key of its immediate ancestor. CR Classification: D.4.6 Key words: access control, cryptography, data security, multilevel Sofiene Tahar and Paul Curzon Comparing HOL and MDG: a Case Study on the Verification of an ATM Switch Fabric 372 Abstract Interactive formal proof and automated verification based on decision graphs are two contrasting formal hardware verification techniques. In this paper, we compare these two approaches. In particular, we consider HOL and MDG. The former is an interactive theorem-proving system based on higher-order logic, while the latter is an automatic system based on Multiway Decision Graphs. As the basis for our comparison we have used both systems to independently verify a fabricated ATM communications chip, the Fairisle 4 by 4 switch fabric. CR Classification: B.4.4, F.4.1 Key words: hardware verification, formal methods, tools comparison, theorem proving, equivalence checking, property checking, HOL, MDG, ATM Jesper Larsen and Ib Pedersen Experiments with the Auction Algorithm for the Shortest Path Problem 403 Abstract The auction approach for the shortest path problem (SPP) as introduced by Bertsekas is tested experimentally. Parallel algorithms using the auction approach are developed and tested. Both the sequential and parallel auction algorithms perform significantly worse than a state-of-the-art Dijkstra-like reference algorithm. Experiments are run on a distributed-memory MIMD class Meiko parallel computer. CR Classification: G.2.2, G.4 Key words: shortest path, auction, parallel computing, performance results Maria Andreou and Stavros D. 
Nikolopoulos NC Coloring Algorithms for Permutation Graphs 422 Abstract We show that the problem of coloring a permutation graph of size n can be solved in O(log n log k) time using O(kn^2 / (log k log^2 n)) processors on the CREW PRAM model of computation, where 1 < k < n. We estimate the parameter k on random permutation graphs and show that the coloring problem can be solved in O(log n log log n) time in the average case on the CREW PRAM model of computation with O(n^2) processors. Our computational strategy goes as follows: Given a permutation π or its corresponding permutation graph G[π], we first construct a directed acyclic graph G*[π] using certain combinatorial properties of π, and then compute longest paths in the directed acyclic graph using divide-and-conquer techniques. We show that the problem of coloring a permutation graph G[π] is equivalent to finding longest paths in its acyclic digraph G*[π]. The best-known parallel algorithms for the same problem run in O(log^2 n) time using O(n^3 / log n) processors on the CREW PRAM model of computation. CR Classification: F.2.2, G.2.2 Key words: parallel algorithms, perfect graphs, permutation graphs, coloring problem, directed acyclic graphs, longest paths Drago Krznaric, Christos Levcopoulos and Bengt J. Nilsson Minimum Spanning Trees in d Dimensions 446 Abstract It is shown that a minimum spanning tree of n points in R^d under any fixed L_p-metric, with p = 1, 2, ..., ∞, can be computed in optimal O(T_d(n,n)) time in the algebraic computation tree model, where T_d(n,m) denotes the time to find a bichromatic closest pair between n red points and m blue points. The previous bound in this model was O(T_d(n,n) log n), and it was proved only for the L_2 (Euclidean) metric. Furthermore, for d = 3 it is shown that a minimum spanning tree can be found in O(n log n) time under the L_1 and L_∞ metrics. This is optimal in the algebraic computation tree model.
The previous bound was O(n log n log log n). CR Classification: F.2.2, G.2.2 Key words: algorithms, computational geometry, networks, trees Olivier Devillers and Asish Mukhopadhyay Finding an Ordinary Conic and an Ordinary Hyperplane 462 Abstract Given a finite set of non-collinear points in the plane, there exists a line that passes through exactly two of the points. Such a line is called an ordinary line. An efficient algorithm for computing such a line was proposed by Mukhopadhyay et al. (1997). In this note we extend this result in two directions. We first show how to use this algorithm to compute an ordinary conic, that is, a conic passing through exactly five points, assuming that the points do not all lie on the same conic. Both our proofs of existence and the consequent algorithms are simpler than previous ones. We next show how to compute an ordinary hyperplane in three and higher dimensions. CR Classification: F.2.2, I.3.5 Key words: computational geometry, degenerate configurations, ordinary line, ordinary conic, ordinary hyperplane Joachim Gudmundsson and Christos Levcopoulos A Fast Approximation Algorithm for TSP with Neighborhoods 469 Abstract In TSP with neighborhoods (TSPN) we are given a collection X of k polygonal regions in the plane, called neighborhoods, with n vertices in total, and we seek the shortest tour that visits each neighborhood. In this paper we present a simple and fast algorithm that, given a start point, computes a TSPN tour of length O(log k) times the optimum in time O(n + k log k). When no start point is given, we show how to compute a "good" start point in time O(n^2 log n); hence we obtain a logarithmic approximation algorithm that runs in time O(n^2 log n). We also present an algorithm which performs at least one of the following two tasks (which task is performed depends on the given input): (1) It outputs in time O(n log n) a TSPN tour of length O(log k) times the optimum.
(2) It outputs a TSPN tour of length less than (1+ε) times the optimum in time O(n^3), where ε is an arbitrary real constant given as an optional parameter. The same approach can be used for the Red-Blue Separation Problem. We show an algorithm with logarithmic approximation ratio that runs in time O(n log n), where n is the total number of points. CR Classification: F.2.2, I.3.5 Key words: computational geometry, approximation algorithms, red-blue separation, TSP with neighborhoods Author Index Volume 6 (1999) 489 VOLUME 5 NUMBER 1 SPRING 1998 D. Krznaric and C. Levcopoulos Computing a Threaded Quadtree from the Delaunay Triangulation in Linear Time 1 Abstract The quadtree in this paper is a variant in which two nodes are threaded (linked) if they represent equal-sized squares that are sufficiently close to each other. Given the Delaunay triangulation, it is shown that the threaded quadtree can be computed in linear time and space. It is also described how the threaded quadtree can be used to answer certain types of non-trivial range queries in constant time per query, which has applications, for example, in computing the greedy triangulation, the complete linkage clustering, and a constant-factor approximation of the minimum weight triangulation. These queries ask for links to and information concerning the contents of growing ranges, including values of functions based on these contents. CR Classification: F.2.2, E.1, I.4.10 Key words: analysis of algorithms, computational geometry, quadtree, Delaunay triangulation O.-J. Dahl, O. Owe and T. J. Bastiansen Subtyping and Constructive Specification 19 Abstract The use of subtyping is explored in the context of a style of constructive specification. The intention is to enhance expressiveness and strengthen syntactic controls. The resulting specification style is part of a wide-spectrum language, ABEL, developed at the University of Oslo.
A concept of signature completeness is presented, as well as a completion algorithm. Some results relating to weak, strong, and optimal typing are proved. CR Classification: D.1.1, D.2.1, D.2.4, D.3.3, F.3.1, F.3.3 Key words: algebraic specification, specification language, generator induction, subtypes, partial functions, definedness, syntactic control B. Kristoffersen and O.-J. Dahl On Introducing Higher Order Functions in ABEL 50 Abstract We discuss how the first-order specification and programming language ABEL could be extended with higher-order functions. Several issues arise, related to subtyping, parameterization, strictness of generators and defined functions, and to the choice between lambda expressions and currying. The paper can be regarded as an exercise in language design: how to introduce higher-order functions under the restrictions enforced by (first-order) ABEL. A technical result is a soundness proof for covariant subtype replacement, useful when implementing data types under volume constraints imposed by computer hardware. CR Classification: D.1.1, D.2.1, D.2.4, D.3.3, F.3.1, F.3.3 Key words: algebraic specification, higher order functions, term rewriting, generator induction, parameterized modules, subtypes, strictness R. A. Baeza-Yates and H. Soza-Pollman Analysis of Linear Hashing Revisited 70 Abstract In this paper we characterize several expansion techniques used for linear hashing, and we show how to analyze any linear hashing technique that expands based on local events or that mixes local events and global conditions. As an example we give a very simple randomized expansion technique, which is easy to analyze and implement. Furthermore, we obtain the analysis of the original hashing technique devised by Litwin, which was unsolved until now, comparing it to Larson's later and more widely used version. We also analyze one hybrid technique.
Among other results, it is shown that the control function used by Litwin does not produce good storage utilization, matching known experimental data. CR Classification: F.2.2, E.5, E.2 Key words: external hashing, linear hashing, analysis of algorithms, optimal bucketing VOLUME 5 NUMBER 2 SUMMER 1998 S. Dobrev and P. Ruzicka On the Communication Complexity of Strong Time-Optimal Distributed Algorithms 87 Abstract In this paper we introduce the notion of time optimality in the strong sense and investigate the communication complexity of strong time-optimal distributed algorithms. We show that a strong time-optimal algorithm solving the MST problem on networks with n nodes and m links interchanges at least (m-n)^2 messages. We also present a Theta(m^2) bound on the communication complexity of strong time-optimal algorithms solving the gossip problem. As a consequence, similar results can be shown for other interesting problems on arbitrary networks, such as electing a leader, determining the median and the center, computing the diameter, and others. CR Classification: C.2.1, G.2.2, C.2.2, F.1.3, F.2.3 Key words: distributed algorithms, time and communication complexity, asynchronous networks P. Høyer and K. S. Larsen Parametric Permutation Routing via Matchings 105 Abstract We consider the problem of routing permutations on graphs via matchings and present a general algorithm that can be parameterized by different heuristics. This leads to a framework which makes the analysis simple and local. CR Classification: F.1.2, F.2.2 Key words: permutation routing, matching, graph algorithm J. L. Ganley and J. S. Salowe The Power-p Steiner Tree Problem 115 Abstract This paper introduces the power-p Steiner tree problem, which is to find a geometric Steiner tree that minimizes the sum of the edge lengths, each raised to the pth power. A number of results are presented on computing optimal and approximate power-p Steiner trees.
Specifically, a linear-time numerical algorithm is presented for computing optimal Euclidean power-2 Steiner trees with respect to a given topology, and the algorithm is proved to be numerically stable. It is conjectured, and strong evidence given, that the power-p Steiner tree problem is not finitely solvable for p >= 5. Finally, bounds are given on the power-p Steiner ratio, which measures the quality of a minimum spanning tree as an approximation of an optimal power-p Steiner tree. CR Classification: F.2, G.2 Key words: Steiner tree, geometric algorithms, approximation, facility location P. Heggernes and J. A. Telle Partitioning Graphs into Generalized Dominating Sets 128 Abstract We study the computational complexity of partitioning the vertices of a graph into generalized dominating sets. Generalized dominating sets are parameterized by two sets of nonnegative integers σ and ρ, which constrain the neighborhood N(v) of vertices. A set S of vertices of a graph is said to be a (σ, ρ)-set if ∀v ∈ S: |N(v) ∩ S| ∈ σ and ∀v ∉ S: |N(v) ∩ S| ∈ ρ. The (k, σ, ρ)-partition problem asks for the existence of a partition V_1, V_2, ..., V_k of the vertices of a given graph G such that each V_i, i = 1, 2, ..., k, is a (σ, ρ)-set of G. We study the computational complexity of this problem as the parameters σ, ρ and k vary. CR Classification: F.2.2, G.2.2 Key words: graph partitioning, NP-completeness, complexity, domination M. Johnsson, G. Magyar and O. Nevalainen On the Euclidean 3-Matching Problem 143 Abstract In the 3-matching problem (3MP) we want to partition a set of n = 3l points into l disjoint subsets, each consisting of three points (called triplets), and to minimize the total cost of the triplets. The problem is related to the well-known weighted 2-matching, 3-dimensional assignment and p-median problems.
We consider a Euclidean 3MP (E3MP) in which the cost c_ijk of a triplet (i, j, k) is the sum of the lengths of the two shortest edges of the triangle (i, j, k). The problem has several related applications in the optimization of manufacturing operations. We show that the general 3MP is NP-complete and give several methods to calculate lower bounds for the E3MP. Exact solution by a branch-and-bound technique is possible for small instances; for large problem instances we have to apply approximate solution algorithms. Local-search-based heuristics augmented with ideas from tabu search, simulated annealing and genetic algorithms worked well in our tests. CR Classification: F.2.2, G.1.6, G.2.1, G.2.2 Key words: matching problems, lower bounds, Lagrangian relaxation, branch-and-bound, heuristic methods, simulated annealing, genetic algorithms VOLUME 5 NUMBER 3 FALL 1998 Jan Kratochvíl, Andrzej Proskurowski and Jan Arne Telle Complexity of Graph Covering Problems 173 Abstract For a fixed graph H, the H-cover problem asks whether an input graph G allows a degree-preserving mapping f: V(G) -> V(H) such that for every v in V(G), f(N_G(v)) = N_H(f(v)). In this paper we design efficient algorithms for certain graph covering problems according to two basic techniques. The first is based in part on a reduction to the 2-SAT problem. The second technique exploits necessary and sufficient conditions for the partition of a graph into 1-factors and 2-factors. For other infinite classes of graph covering problems we derive NP-completeness results by reductions from graph coloring problems. We illustrate this methodology by classifying the complexity of all H-cover problems defined by simple graphs H with at most 6 vertices. CR Classification: F.2.2, G.2.2 Key words: local isomorphism, algorithms, complexity Teofilo F. Gonzalez Bounded Fan-Out Multimessage Multicasting 196 Abstract We consider multimessage multicasting over the n-processor complete (or fully connected) static network (MM_C).
We present a fast approximation algorithm with an improved approximation bound for problem instances with small fan-out (maximum number of processors receiving any given message), but arbitrary degree d, where d is the maximum number of messages that each processor may send or receive. These problem instances are the ones that arise in practice, since the fan-out restriction is imposed by the applications and the number of processors available in commercial systems. Our algorithm is centralized and requires all the communication information ahead of time. Applications where this information is available include iterative algorithms for solving linear equations and most dynamic programming procedures. The Meiko CS-2 machine as well as other computer systems whose processors communicate via dynamic networks will also benefit from our results at the expense of doubling the number of communication phases. CR Classification: F.2.2, C.1.4, G.2.2, C.2.1, G.1.3 Key words: approximation algorithms, multimessage multicasting, dynamic networks, parallel iterative methods, generalized edge coloring Christian Mossin Higher-Order Value Flow Graphs 214 Abstract The concepts of value and control flow graphs are important for program analysis of imperative programs. An imperative value flow graph can be constructed by a single pass over the program text. No similar concepts exist for higher-order languages: we propose a method for constructing value flow graphs for typed higher-order functional languages. A higher-order value flow graph is constructed by a single pass over an explicitly typed program. By using standard methods, single-source and single-use value flow problems can be answered in linear time and all-sources-all-uses can be answered in quadratic time (in the size of the flow graph, which is equivalent to the size of the explicitly typed program). On simply typed programs, the precision of the resulting analysis is equivalent to closure analysis. 
In practice, it is reasonable to assume that typed programs are only bigger than their untyped equivalents by a constant factor; hence this is an asymptotic improvement over previous algorithms. We extend the analysis to handle polymorphism, sum types and recursive types. As a consequence, the analysis can handle (explicitly) dynamically typed programs. The analysis is polyvariant for polymorphic definitions. CR Classification: D.3.4, F.3.3, F.4.1 Key words: program analysis, type system, efficiency, polymorphism, recursive types, polyvariance Jukka Paakki and Antti-Pekka Tuovinen Source-to-Source Translation of Visual Languages 235 Abstract In this paper we study the problem of translation between visual languages. Languages with a visual syntax are currently in wide use, due to the popularity of different kinds of diagrams used especially in software analysis, design, and animation. Unlike conventional textual languages, visual languages are still quite immature with regard to the formalisms for their specification and automatic implementation. As a consequence, current diagram editors are usually implemented with ad hoc solutions on a weak methodological foundation. This makes it hard to develop sophisticated diagram manipulators, such as meaning-preserving transformers between two different styles of diagrams. To overcome this problem, we present a solid method for the transformation between diagrams, or more generally, for the source-to-source translation between two visual languages. The method is based on a mapping between atomic relational grammars for the two languages, and on treating translation as a parse-tree transformation process. We also present the underlying technique of producing a parse tree for a visual language specified with an atomic relational grammar.
CR Classification: D.3.4, F.4.2, D.1.7 Key words: grammatical modeling, diagram transformation, visual language parsing, visual language translation VOLUME 5 NUMBER 4 WINTER 1998 SELECTED PAPERS OF THE EUROPEAN SYMPOSIUM ON PROGRAMMING (ESOP'98), APRIL 2-3, 1998: Chris Hankin Guest Editor's Foreword 265 Martín Abadi and Andrew D. Gordon A Bisimulation Method for Cryptographic Protocols 267 Abstract We introduce a definition of bisimulation for cryptographic protocols. The definition includes a simple and precise model of the knowledge of the environment with which a protocol interacts. Bisimulation is the basis of an effective proof technique, which yields proofs of classical security properties of protocols and also justifies certain protocol optimizations. The setting for our work is the spi calculus, an extension of the pi calculus with cryptographic primitives. We prove the soundness of the bisimulation proof technique within the spi calculus. CR Classification: C.2.2, D.1.3, E.3, F.3.1 Key words: cryptographic protocols, verification, bisimulation, process calculus Christian Fecht and Helmut Seidl Propagating Differences: an Efficient New Fixpoint Algorithm for Distributive Constraint Systems 304 Abstract Integrating semi-naive fixpoint iteration from deductive databases [3,2,4] (François Bancilhon and Raghu Ramakrishnan: An amateur's introduction to recursive query processing strategies. In ACM SIGMOD Conference, 1986, 16-52; Isaac Balbin and Kotagiri Ramamohanarao: A generalization of the differential approach to recursive query evaluation. Journal of Logic Programming (JLP) 4, 3, 1987, 259-262; François Bancilhon and Raghu Ramakrishnan: Performance evaluation of data intensive logic programs. In Foundations of Deductive Databases.) as well as continuations into worklist-based solvers, we derive a new application-independent local fixpoint algorithm for distributive constraint systems.
Seemingly different efficient algorithms for abstract interpretation like those for linear constant propagation for imperative languages [16] (Susan Horwitz, Thomas W. Reps, and Mooly Sagiv: Precise interprocedural dataflow analysis with applications to constant propagation. In Proceedings of 6th International Conference on Theory and Practice of Software Development (TAPSOFT), Volume 915 of LNCS, Springer-Verlag, 1995, 651-665.) as well as for control-flow analysis for functional languages [12] (Nevin Heintze: Set-based analysis of ML programs. In ACM Conference on Lisp and Functional Programming (LFP), 1994, 306-317.) turn out to be instances of our scheme. Besides this systematizing contribution we also derive a new efficient algorithm for abstract OLDT-resolution as considered in [14,15,24] (Pascal Van Hentenryck, Olivier Degimbe, Baudouin Le Charlier, and Laurent Michel: Abstract Interpretation of Prolog based on OLDT resolution. Technical Report CS-93-05, Brown University, Providence, RI 02912, 1993; Pascal Van Hentenryck, Olivier Degimbe, Baudouin Le Charlier, and Laurent Michel: The impact of granularity in abstract interpretation of Prolog. In Proceedings of Static Analysis, 3rd International Workshop (WSA), Volume 724 of LNCS, Springer-Verlag, 1993, 1-14; Helmut Seidl and Christian Fecht: Interprocedural analysis based on PDAs. Technical Report 97-06, University Trier, 1997, Extended Abstract in: Verification, Model Checking and Abstract Interpretation A Workshop in Association with ILPS'97. Long version to appear in Journal of Logic Programming (JLP).) for Prolog. CR Classification: D.3.4, F.3.1, F.2.2, H.3.3 Key words: fixpoint computation, semi-naive iteration, program analysis K. Rustan M. Leino Recursive Object Types in a Logic of Object-Oriented Programs 330 Abstract This paper formalizes a small object-oriented programming notation. 
The notation features imperative commands where objects can be shared (aliased), and is rich enough to allow subtypes and recursive object types. The syntax, type-checking rules, axiomatic semantics, and operational semantics of the notation are given. A soundness theorem showing the consistency between the axiomatic and operational semantics is stated and proved. A simple corollary of the soundness theorem demonstrates the soundness of the type system. Because of the way types, fields, and methods are declared, no extra effort is required to handle recursive object types. CR Classification: D., D.3, D.3.1 Key words: programming logic, object-oriented programs, recursive object types, program correctness, axiomatic semantics, soundness John L. Ross and Mooly Sagiv Building a Bridge between Pointer Aliases and Program Dependences 361 Abstract In this paper we present a surprisingly simple reduction of the program dependence problem to the may-alias problem. While both problems are undecidable, providing a reduction between them has great practical importance. Program dependence information is used extensively in compiler optimizations, automatic program parallelization, code scheduling for super-scalar machines, and software engineering tools such as code slicers. When working with languages that support pointers and references, these systems are forced to make very conservative assumptions. This leads to many superfluous program dependences and limits compiler performance and the usability of software engineering tools. Fortunately, there are many algorithms for computing conservative approximations to the may-alias problem. The reduction has the important property of always computing conservative program dependences when used with a conservative may-alias algorithm. We believe that the simplicity of the reduction and the fact that it takes linear time may make it practical for realistic applications.
CR Classification: D.3.3, D.3.4 Key words: alias analysis, dataflow analysis, pointer analysis, program dependences, static analysis Author Index Volume 5 (1998) 387 VOLUME 4 NUMBER 1 SPRING 1997 THEME ISSUE ON PROGRAMMING ENVIRONMENTS K. Koskimies Editor's Foreword 1 J. Paakki, J. Koskinen and A. Salminen From Relational Program Dependencies to Hypertextual Access Structures 3 Abstract Several important aspects of software systems can be expressed as dependencies between their components. A special class of dependencies concentrates on the program text and captures the technical structure and behavior of the target system. The central characteristic making such program dependencies valuable in software engineering environments is that they can be automatically extracted from the program by applying well-known methods of programming language implementation. We present a model of program dependencies by considering them as relations between program elements. Moreover, we show how dependency relations form the basis of producing a graph-like hypertextual representation of programs for a programming environment. Having a general and well-defined model of program dependencies as a foundation makes it easier to systematically construct and integrate language-based tools. As an example application, we present a hypertextual tool which is founded on our relational dependency model and which can be used to maintain programs written in the programming language C. CR Classification G.2.2; D.2.2; D.2.5; H.5.1 Key words: program dependencies, hypertext, software maintenance, reverse engineering S. G. Efremidis, K. A. Mughal, J. H. Reppy and L. Søraas AML: Attribute Grammars in ML 37 Abstract Attribute grammars are a valuable tool for constructing compilers and building user interfaces. 
This paper reports on a system we are developing, called AML (for Attribution in ML), which is an attribute grammar toolkit for building such applications as language-based programming environments using SML. This system builds on the proven technology of efficient attribute evaluation, while using a higher-level foundation for the implementation of interactive systems. It supports a general and uniform platform for building applications that can manipulate attributed terms and allow access to attribute values. We describe the design of the AML system, its current implementation status, and our plans for the future. CR Classification D.1.1; D.2.6; D.3.4 Key words: attribute grammars, attribute evaluation, functional programming, program generator, programming environments J. Bosch Delegating Compiler Objects: Modularity and Reusability in Language Engineering 66 Abstract The application domain of compiler techniques is changing. Whereas previously compiler techniques were primarily used for the construction of compilers for general purpose languages, now these techniques are increasingly often used for the construction of application domain languages and extensible language models. However, the traditional compiler techniques suffer from problems of complexity, maintainability, reusability and extensibility and new approaches are needed. In this paper, we describe the notion of delegating compiler objects (DCOs), a novel approach to compiler construction that provides structural decomposition and reusability of compiler specifications. Our extensible language, the layered object model, is used to illustrate the advantages of the DCO approach for compiler construction. CR Classification D.3.4 Key words: reusable compiler generators, language engineering, object-oriented compiler technique G. 
Hedin Attribute Extensions - a Technique for Enforcing Programming Conventions 93 Abstract A problem in supporting reusability of software libraries and frameworks is that the programming conventions which need to be followed are only informally described. Safer reuse would result if these conventions could be enforced, preferably at compile time. This paper presents a technique supporting such enforcement. The technique is based on attribute grammars and allows the construction of extensible compilers and checkers. CR Classification D.1.5; D.2.2; D.3.4 Key words: domain-specific languages, extensible languages, reusability, class libraries, object-oriented frameworks, attribute grammars, extensible compilers. D. I. K. Sjøberg, R. Welland, M. P. Atkinson, P. Philbrow, C. Waite and S. Macneill The Persistent Workshop - a Programming Environment for Napier88 123 Abstract The Persistent Workshop is a programming environment to support the construction of application systems in Napier88, a persistent programming language. In addition to supporting programming activities, the Workshop provides a demonstration of the persistence technology together with an environment in which various experiments can be conducted. The paper gives a brief description of the underlying principles of persistence, which are exploited by the Workshop. The basic structure of the Workshop is described, together with a brief summary of the tools provided within the programming environment, followed by an outline of the internal structure of the Workshop and the use of persistent data structures. Some initial measurements on the usage of the Workshop are briefly presented, and we discuss related work. Finally, the current status of the Workshop is described together with some comments on future work. CR Classification D.2.2; D.2.6 Key words: programming environments, programmer workbench, persistent programming, logging technology VOLUME 4 NUMBER 2 SUMMER 1997 G. Gambosi, A. Postiglione and M. 
Talamo On-Line Maintenance of an Approximate Bin-Packing Solution 151 Abstract The classical on-line bin-packing problem, unlike typical on-line problems, does not admit any reorganization of its data, i.e. no element can be moved from the bin into which it was first inserted. In this paper we introduce a new model for this problem, in which an element may be moved upon the arrival of another element, but the number of movements performed is explicitly accounted for in the complexity analysis. Within this paradigm, we introduce a new class of O(n log n)-time and O(n)-space algorithms, HARMONIC_REL(M) (where M > 2 is an integer), that, for each prefix of the input list, obtain a new bin assignment by performing a limited reorganization of the previous solution, making at most cn element movements (c = 2 for M = 3). All the algorithms in this class achieve a 1.5 asymptotic approximation ratio (and a 1.5 OPT + (M-1) absolute ratio), beating the theoretical 1.536... lower bound for on-line bin-packing in the restricted model. CR Classification: F.2, G.2 Key words: complexity, approximation algorithms, on-line algorithms, bin packing T. Roos New Upper Bounds on Voronoi Diagrams of Moving Points 167 Abstract In d-dimensional Euclidean space E^d, d >= 2, we are given k <= n points moving continuously along given trajectories and n-k points that stay fixed at their positions. At each instant, the points define a Voronoi diagram which changes continuously except at certain critical instants, so-called topological events. We improve the currently known worst-case upper bound on the number of topological events for k in O(sqrt(n)), and provide, for the first time, matching upper and lower worst-case bounds for the case where k is a constant. CR Classification: E.1, F.2.2, G.2.1, I.1.2, I.3.5 Key words: analysis of algorithms, computational geometry, Delaunay triangulation, kinematic objects, Voronoi diagram M. Karpinski, W. Rytter and A.
Shinohara An Efficient Pattern-Matching Algorithm for Strings with Short Descriptions 172 Abstract We investigate the time complexity of the pattern matching problem for strings which are succinctly described in terms of straight-line programs, in which the constants are symbols and the only operation is concatenation. Most strings of descriptive size n are of exponential length with respect to n. A complicated algorithm for the equality test (testing whether two succinctly described strings are the same) in O(n^4) time was constructed by Plandowski. Our main aim is to extend this result, and we show that the much stronger problem of pattern-matching for succinctly described strings can be solved with similar complexity: in O(n^4 log n) time. However, Plandowski's algorithm is not used in this paper, and our algorithm gives an independent constructive proof. The crucial point in our algorithm is the succinct representation of all periods of a (possibly long) string described in this manner. We also show a (rather straightforward) result that a very simple extension of the pattern-matching problem for succinctly described strings is NP-complete. CR Classification: E.4, F.2.2 Key words: pattern matching, text search, compression, straight-line program E. Roche Compact Factorization of Finite-State Transducers and Finite-State Automata 187 Abstract Finite-state transducers and finite-state automata are efficient and natural representations for a large variety of problems. We describe a new algorithm for turning a finite-state transducer into the composition of two deterministic finite-state transducers such that the combined size of the derived transducers can be exponentially smaller than that of other known deterministic constructions. As a consequence, this can also be used to build deterministic representations of finite-state automata exponentially smaller than the deterministic minimal finite-state automata computed by the classic determinization and minimization algorithms.
We also report experimental results on large-scale dictionaries and rule-based systems. CR Classification: E.1, E.2, I.2.7 Key words: finite-state automaton, finite-state transducer, bimachine, regular expression M. Mohri String-Matching with Automata 217 Abstract We present an algorithm to search in a text for the patterns of a regular set. Unlike many classical algorithms, ours takes as input a deterministic automaton rather than a regular expression. Our algorithm is based on the notion of failure function and mainly consists of efficiently constructing a new deterministic automaton. This construction is shown to be efficient. In particular, its space complexity is linear in the size of the obtained automaton. CR Classification: F.1.1, F.2.0, F.2.2, F.4.3 Key words: finite automata, pattern-matching, strings VOLUME 4 NUMBER 3 FALL 1997 L. Lundberg and H. Lennerstad Bounding the Gain of Changing the Number of Memory Modules in Shared Memory Multiprocessors 233 Abstract We consider a multiprocessor with p processors and m memory modules. If more than one processor wants to access a memory module simultaneously, these accesses are serialized due to memory contention. The completion time of a program executing on this system is thus affected by the way addresses are mapped onto memory modules, and finding a mapping which results in minimal completion time is NP-hard. If we change the number of memory modules from m to m', while keeping the number of processors constant, we will generally change the minimal completion time. The gain of changing the number of memory modules from m to m' for a certain program is defined as the ratio between the minimal completion times, using m and m' modules respectively. Here, we present an optimal upper bound on the maximum gain of changing the number of memory modules, as a function of m, m' and p, including the case when m' is infinitely large.
The bound is obtained by investigating a mathematical formulation. The mathematical tools involved are essentially elementary combinatorics. The formula for calculating the bound is mathematically complicated but can be rapidly computed for reasonable m, m' and p. This bound makes it possible to do price-performance trade-offs and to compare any mapping of addresses to memory modules with the optimal case. The results are applicable to different multiprocessor architectures, e.g. systems with crossbar networks and systems with multiple buses. The results also make it possible to calculate optimal performance bounds for multiprocessors containing cache memories, as well as for multiprocessors with no cache memories. Moreover, we discuss how the results can be used for calculating bounds for programs with and without synchronizations. CR Classification: C.1.2, C.4, G.2.1 Key words: performance bound, memory modules, multiprocessor, crossbar network, combinatorics J. Palsberg and T. Jim Type Inference with Simple Selftypes is NP-complete 259 Abstract The metavariable self is fundamental in object-oriented languages. Typing self in the presence of inheritance has been studied by Abadi and Cardelli, Bruce, and others. A key concept in these developments is the notion of selftype, which enables flexible type annotations that are impossible with recursive types and subtyping. Bruce et al. demonstrated that, for the language TOOPLE, type checking is decidable. Until now, the problem of type inference with selftype has remained open. In this paper we present a simple type system with selftype, recursive types, and subtyping, and we prove that type inference is NP-complete. With recursive types and subtyping alone, type inference is known to be P-complete. Our example language is the object calculus of Abadi and Cardelli. Both our type inference algorithm and our lower bound are the first such results for a type system with selftype.
CR Classification: D.3.2; F.3.3 Key words: type inference, constraints T. Gyimóthy and T. Horváth Learning Semantic Functions of Attribute Grammars 287 Abstract Attribute grammars can be considered as an extension of context-free grammars, where the attributes are associated with grammar symbols, and the semantic rules define the values of the attributes. This formalism is widely applied for the specification and implementation of compilation-oriented languages. The paper presents a method for learning semantic functions of attribute grammars, which is a hard problem because semantic functions can also represent relations. The method uses background knowledge in learning semantic functions of S-attributed and L-attributed grammars. The given context-free grammar and the background knowledge allow one to restrict the space of relations and give a smaller representation of data. The basic idea of this method is that the learning problem of semantic functions is transformed into a propositional form and the hypothesis induced by a propositional learner is transformed back into semantic functions. CR Classification: D.3.2, I.2.6 Key words: attribute grammar, machine learning, inductive logic programming, attribute-value learner A. Yamaguchi, K. Nakano and S. Miyano An Approximation Algorithm for the Minimum Common Supertree Problem 303 Abstract The minimum common supertree problem is to find a minimum k-ary common supertree for a given set T of labeled complete k-ary trees. This problem is NP-hard. This paper presents a polynomial-time approximation algorithm that solves this problem in O(n^3 log n) time, where n is the total number of edges of the trees in T. We prove that the algorithm constructs a common supertree that is at most 1+1/(2k-2) times as large as the minimum one. CR Classification: F.2.2, G.2.2 Key words: approximation algorithm, NP-complete, minimum common supertree problem, labeled trees VOLUME 4 NUMBER 4 WINTER 1997 A. Israeli, E.
Kranakis, D. Krizanc and N. Santoro Time-Message Trade-Offs for the Weak Unison Problem 317 Abstract A set of anonymous processors is interconnected forming a complete synchronous network with sense of direction. Weak unison is the problem where all processors want to enter the same state (in our case the "wakeup" state) in the absence of a global start-up signal. As a measure of complexity of the protocols considered, we use the "bits" times "lag" measure, i.e. the total number of (wakeup) messages transmitted throughout the execution of the protocol times the number of steps which are sufficient in order for all the processors to wake up. We study trade-offs in the complexity of such algorithms under several conditions on the behavior of the processors (oblivious, non-oblivious, balanced, etc.) and provide tight upper and lower bounds on the time times #messages measure. CR Classification: C.2.1 Key words: anonymous network, balanced, chordal rings, t-step protocol, non-oblivious, oblivious, time-message complexity, unbalanced, unison, wakeup protocol, weak unison A. Mukhopadhyay, A. Agrawal and R. M. Hosabettu On the Ordinary Line Problem in Computational Geometry 330 Abstract Let P be a set of n points in the plane. A connecting line of P is a line that passes through at least two of its points. A connecting line is called ordinary if it is incident on exactly two points of P. If the points of P are not collinear then such a line exists; in fact, there are Omega(n) such lines. In this paper, we present two O(n log n) time algorithms for finding an ordinary line, assuming that the points of P are not collinear. We also present an optimal O(n^2) time algorithm for finding all such ordinary lines. CR Classification: F.2.2, Y.1.3, I.3.5 Key words: computational geometry, ordinary line, parametric searching M. Pal and G. P.
Bhattacharjee An Optimal Parallel Algorithm for All-Pairs Shortest Paths on Unweighted Interval Graphs 342 Abstract A cost-optimal parallel algorithm is presented to find the all-pairs shortest paths on an unweighted interval graph; it takes O(n/p + log n) time on an EREW PRAM, where p and n denote, respectively, the number of processors and the number of vertices of the graph. CR Classification: E.1, F.2, G.2.2, I.1.2. Key words: design and analysis of algorithms, parallel algorithm, interval graph, all-pairs shortest paths, EREW PRAM O. Kaser Optimal Height Reduction Problems for Tree-Structured Hierarchies 357 Abstract Hierarchical descriptions of objects may be restructured by selective inlining. This paper considers the improvement of tree-structured hierarchies. Trade-offs are observed between the increased size of an inlined description and its reduced number of hierarchical levels. Optimization problems are formulated where the minimum size expansion is incurred while obtaining a desired height reduction. Two varieties of inlining are considered, and weighted graphs are used to model the optimization problem. For one variety of inlining, an O(n sqrt(H log n))-time algorithm is described; it can reduce an n-cell hierarchy to H levels, if the structure of the hierarchy is linear. For general trees, an O(H n^2)-time algorithm is given. With the other variety of inlining, the linear problem is solved in O(H^2 n^3) time, while for general trees with certain arc-weight constraints, a 2-approximation algorithm runs in O(H n^2) time. Related open problems are given and applications are discussed. CR Classification: F.2.2, G.2.2, I.3.5 Key words: hierarchy, restructuring, inlining, optimization, graphs, trees, algorithms H. Erdogmus Architecture-Driven Verification of Concurrent Systems 380 Abstract This paper proposes a method to construct a set of proof obligations from the architectural specification of a concurrent system.
The architectural specifications used express correctness requirements of a concurrent system at a high level without any reference to component functionality. Then the proof obligations derived from such specifications are discharged as model checking tasks in a suitable behavioral model where components are assigned their respective functionalities. An experimental extension to the SPIN tool is used as the model checker. The block diagram notation used to specify architectures allows interchangeable components with equivalent intended functionalities to be encapsulated within a representative module. A proof obligation of such a system is discharged as an equivalence checking task in the behavioral model chosen. It is shown how infeasible proof obligations can be decomposed by decomposing the architectural specification. Obligation decomposition relies on assume-guarantee conditions. CR Classification: D.2.1, D.2.4, F.3.1, C.2.4 Key words: architecture-based verification, architectural specifications, architectural formalisms, model checking, equivalence checking, compositional verification. VOLUME 3 NUMBER 1 SPRING 1996 Y. H. Tsin and C.-A. Wang Geodesic Voronoi Diagrams in the Presence of Rectilinear Barriers 1 Abstract The problem of constructing the geodesic Voronoi diagram for a set of sites S with a set of parallel line segments O as obstacles is addressed and an O((m+n)log(m+n)) time and O(m+n) space algorithm is presented for constructing the diagram, where |S|=n and |O|=m. The algorithm is a plane-sweep algorithm which does not use geometric transformation. It uses two plane sweeps, advancing from two opposite directions, to produce two data structures, called the shortest path maps. The two maps are then tailored to produce the desired geodesic Voronoi diagram. 
When m=0, the algorithm produces the original Voronoi diagram for the sites S in O(n log n) time and O(n) space, and when the sites in S are assigned weights, a minor modification of the algorithm can construct the weighted Voronoi diagram for S in O(n log n) time and O(n) space. CR Classification: I.2.3, F.2.2 Key words: geodesic Voronoi diagrams, geodesic distance, proximity, computational geometry, analysis of algorithms, plane-sweep J. Katajainen, T. Pasanen and J. Teuhola Practical In-Place Mergesort 27 Abstract Two in-place variants of the classical mergesort algorithm are analysed in detail. The first, straightforward variant performs at most N log_2 N + O(N) comparisons and 3N log_2 N + O(N) moves to sort N elements. The second, more advanced variant requires at most N log_2 N + O(N) comparisons and epsilon N log_2 N moves, for any fixed epsilon > 0 and any N > N(epsilon). In theory, the second one is superior to advanced versions of heapsort. In practice, due to the overhead of the index manipulation, our fastest in-place mergesort is still about 50 per cent slower than bottom-up heapsort. However, our implementations are practical compared to mergesort algorithms based on in-place merging. CR Classification: F.2.2 Key words: sorting, mergesort, in-place algorithms H. Seidl Least and Greatest Solutions of Equations over N 41 Abstract We consider the problem of computing least and greatest solutions of a system of equations x_i = f_i, i = 1,..., n, over N, i.e., the naturals (extended by infinity), where the right-hand sides f_i are expressions built up from constants and variables by various sets of operations. We present efficient algorithms for the cases where the following operations occur: (1) minimum and maximum; (2) maximum, addition and multiplication; (3) minimum, addition and multiplication; (4) minimum, maximum, addition and multiplication. We extend the methods to the cases where (one-sided) conditionals are allowed as well.
CR Classification: D.3.4, F.4.1, G.2.1 Key words: equations over integers, least or greatest solution, program analysis S. G. Akl and I. Stojmenovic Generating t-ary Trees in Parallel 63 Abstract We present a cost-optimal parallel algorithm for generating t-ary trees. Using a known inversion table representation, our algorithm generates all tree sequences in lexicographic order. It uses a linear array of n processors (where n is the number of nodes in the tree), each having a constant number of registers (each storing an integer of size at most tn), and each being responsible for producing one element of a given tree sequence. CR Classification: G.1.0, G.2.1, G.2.2 Key words: binary tree, t-ary tree, parallel algorithms, breadth first search, linear array of processors, parallel random access machine, lexicographic order A. Datta, A. Maheshwari and J.-R. Sack Optimal Parallel Algorithms for Direct Dominance Problems 72 Abstract We present optimal parallel solutions to direct dominance problems for planar point sets and provide an application. The algorithms presented here are deterministic and designed to run on the concurrent read exclusive write parallel random-access machine (CREW PRAM). In particular, we provide algorithms for counting the number of points that are directly dominated by each point of an n-point set and for reporting these point sets. The counting algorithm runs in O(log n) time using O(n) processors; the reporting algorithm runs in O(log n) time using O(n + k/log n) processors, where k is the size of the output. The parallel work of our algorithms matches the time complexity of known optimal sequential algorithms. As an application of our results, we present a parallel algorithm for the maximum empty rectangle problem. CR Classification: F.2.2 Key words: dominance problems, parallel algorithms, computational geometry, direct dominance VOLUME 3 NUMBER 2 SUMMER 1996 C.-Z. Lin, C.-C. Tseng, Y.-L. Chen and T.-W. 
Kuo A Systematic Approach to Synthesize Data Alignment Directives for Distributed Memory Machines 89 Abstract In this paper, we propose a systematic method to synthesize the alignment and realignment directives for data-parallel languages. By using several heuristic rules, this method can determine the multiple alignment relation of the given program. This method can be carried out in polynomial time. An NAS parallel benchmark program was selected and compiled for execution on a Transputer system. The result shows that the execution time of the multiple alignment version of the given program obtained by the proposed method is smaller than that of the single alignment version. CR Classification: D.3.4, I.2.2, D.3.2 Key words: parallel processing, compilers, data management, performance evaluation, multiprocessors L. Malmi A New Method for Updating and Rebalancing Tree-Type Main Memory Dictionaries 111 Abstract The most important methods for balancing search trees are periodic rebuilding of the whole tree, which takes Omega(N) time, and rebalancing the tree during each update operation, in O(log N) time. Recently, a new method called relaxed balancing has been proposed, in which balancing and updates are uncoupled. Updates only leave balance information in the tree, thus enabling rebalancing later. In this paper, new algorithms for updating and rebalancing the tree, based on using the relaxed balance information, are given. It is shown that if M updates are performed in a red-black tree with N keys, the tree can be rebalanced in O(M log(N + M)) time. If the tree is originally empty, the rebalancing is performed in O(M) time. CR Classification: E.1, F.2.2, G.2.2 Key words: algorithms, balanced trees, data structures H.
Linnestad Fatal Steps of Knuth-Bendix Completion 131 Abstract Extended completion of term rewriting systems may abort or succeed for given input, depending on the strategy used for selecting a derivation step from the set of possible ones. Thus, some derivation steps are fatal in the sense that they destroy the possibility of a successful derivation sequence. In this paper we characterize such steps. The results presented are of relevance to efficiency improvement, especially for implementations applying backtracking. CR Classification: F.4.2 Key words: term rewriting systems, Knuth-Bendix completion, equational theories, rewriting modulo a congruence F. Nielson and H. R. Nielson Operational Semantics of Termination Types 144 Abstract In principle termination analysis is easy: find a well-founded ordering and prove that calls decrease with respect to the ordering. We show how to embed termination information into a polymorphic type system for an eager higher-order functional language allowing multiple-argument functions and algebraic data types. The well-founded orderings are defined by pattern matching against the definition of the algebraic data types. We prove that the analysis is semantically sound with respect to a big-step (or natural) operational semantics. We compare our approach based on operational semantics to one based on denotational semantics and we identify the need for extending the semantic universe with new constructs whose sole purpose is to facilitate the proof. For dealing with partial correctness it suffices to consider approximations that are less defined than the desired fixed points; for dealing with total correctness we introduce functions that are more defined than the fixed points. CR Classification: D.3.1, D.3.2, F.3.1, F.3.2 Key words: semantics of programming languages, annotated type and effect systems, proof techniques for operational semantics, higher-order functional languages K. Diks and A. 
Pelc Fault-Tolerant Linear Broadcasting 188 Abstract In linear broadcasting, packets originally stored in one node, called the source, have to visit all other nodes of the network. Every packet has a predetermined route indicating in which order it visits the nodes. A faulty link or node of the network destroys all packets passing through it. A linear broadcasting scheme consisting of packets' routes is f-fault-tolerant if every fault-free node is visited by at least one packet for any configuration of at most f link or node failures. We estimate the minimum number of packets for which there exists an f-fault-tolerant linear broadcasting scheme in complete networks, and we construct schemes using few packets. Variations of this problem when faults can occur only in links or only in nodes are also considered. CR Classification: C.2.0, C.4, G.2.1, G.4 Key words: broadcasting, fault-tolerance, packet, route VOLUME 3 NUMBER 3 FALL 1996 C. A. Brown, L. Finkelstein and P. W. Purdom Jr. Backtrack Searching in the Presence of Symmetry 203 Abstract A class of search algorithms for problems with symmetries is discussed. The algorithms take as input a symmetry group (in the form of a permutation group) and use it to avoid repeated search of equivalent portions of the search space. The symmetry checking algorithms are variations on a color automorphism algorithm and make use of recent advances in computational group theory to achieve efficient performance. In particular, this paper gives the first algorithm that combines search rearrangement with an arbitrary symmetry group. Experimental results confirm that the algorithms save a considerable amount of time on symmetric search problems. CR Classification: G.2.1, I.1.2, I.2.8 Key words: backtracking, symmetry group, search rearrangement backtracking P. Eades, M. Keil, P. D. Manuel and M. 
Miller Two Minimum Dominating Sets with Minimum Intersection in Chordal Graphs 220 Abstract We prove that the problem of finding two minimum dominating sets (connected dominating sets or clique transversals) with minimum intersection is polynomially solvable on interval graphs. Furthermore, the problem of deciding whether or not there exist two disjoint minimum dominating sets (connected dominating sets or clique transversals) is shown to be NP-hard for chordal graphs. CR Classification: F.2.2, G.2.2 Key words: chordal graph, interval graph, dominating set, clique transversal G. S. Brodal Partially Persistent Data Structures of Bounded Degree with Constant Update Time 238 Abstract The problem of making bounded in-degree and out-degree data structures partially persistent is considered. The node copying method of Driscoll et al. is extended so that updates can be performed in worst-case constant time on the pointer machine model. Previously it was only known to be possible in amortised constant time. The result is presented in terms of a new strategy for Dietz and Raman's dynamic two player pebble game on graphs. It is shown how to implement the strategy and the upper bound on the required number of pebbles is improved from 2b+2d+O(sqrt(b)) to d+2b, where b is the bound of the in-degree and d the bound of the out-degree. We also give a lower bound that shows that the number of pebbles depends on the out-degree d. CR Classification: E.1, F.2.2 Key words: data structures, partial persistence, pebble game, lower bounds S. Natarajan and A. P. Sprague Disjoint Paths in Circular Arc Graphs 256 Abstract Arikati, Pandu Rangan, and Manacher (BIT, 31 (1991) 182-193) developed an O(m+n) algorithm to find two vertex disjoint paths in a circular arc graph on n vertices and m edges. We provide an improved solution to this problem: the algorithm presented here is both faster (O(n) time complexity) and simpler than the previous algorithm.
The method involves reductions to interval graphs. In an interval graph, the critical notions are unordered paths (vertex disjoint paths from s_1,s_2 to t_1,t_2 in either order) and interchangeable paths (existence of both pairs of vertex disjoint paths). We also prove a theorem (which is best possible, in a sense) that guarantees the existence of vertex disjoint paths if the arcs are sufficiently dense. Finally, we show that the more general problem of determining the existence of k vertex disjoint paths (from s_1,...,s_k to t_1,...,t_k) is NP-complete, where k is part of the input, even restricted to the class of interval graphs. CR Classification: G.2.2 Key words: vertex disjoint paths, circular arc graphs, interval graphs, NP-complete J.-C. Chen Proportion Split Sort 271 Abstract This paper introduces a new sorting algorithm called PROPORTION SPLIT SORT. The algorithm splits a sequence into two blocks in the ratio 1:p-1, where p is a fixed constant, then divides them into four blocks with the median of the first block such that the maximum of the left two blocks is less than or equal to the minimum of the right two blocks. Finally, we sort the left two blocks and the right two blocks separately. The worst-case number of comparisons of this sorting algorithm is bounded by (1/log(2p/(2p-1))) n log n for p > 1. In our simulation, when p=2, the average number is close to n log n - 1.2n, and for some p (e.g. p=16), this algorithm is faster than CLEVER QUICKSORT, which is often referred to as the fastest sorting algorithm. CR Classification: F.2.2 Key words: algorithm, partition, sort, quick sort, insertion sort C.-H. Ang and H. Samet Approximate Average Storage Utilization of Bucket Methods with Arbitrary Fanout 280 Abstract The approximate average storage utilization of bucket methods with fanout of n, assuming a uniform distribution of data, is shown to depend only on the fanout and not on any other factors such as the directory structure.
It obeys the formula (ln n)/(n-1). The analysis makes use of a generalization of previously published methods for n=2 and n=4, and its predictions match these results. The formula is applicable to methods that are based on digital searching (e.g., tries) or balancing rather than comparison-based methods. The formula is also used to detect an erroneous statement about the average storage utilization of a ternary system in Nievergelt, J., Hinterberger, H., and Sevcik, K. C. 1984. The grid file: an adaptable, symmetric multikey file structure. ACM Transactions on Database Systems 9, 1 (March), 38-71. CR Classification: D.4.3, E.1, E.2, F.2.2, H.2.2 Key words: data structures, analysis of algorithms, bucket methods, average storage utilization, B-tree, Grid file, EXHASH, EXCELL, quadtries, ternary system VOLUME 3 NUMBER 4 WINTER 1996 SELECTED PAPERS OF THE 5TH SCANDINAVIAN WORKSHOP ON ALGORITHM THEORY (SWAT'96), JULY 3-5, 1996: R. Karlsson and A. Lingas Guest Editors' Foreword 293 T. W. Lam, W. K. Sung and H. F. Ting Computing Unrooted Maximum Subtrees in Sub-quartic Time 295 Abstract This paper presents an O(n^{1.75 + o(1)}) time algorithm for the Unrooted Maximum Agreement Subtree (UMAST) problem: Given a set A of n items (e.g. species) and two unrooted trees T and T', each with n leaves uniquely labeled by the items of A, we want to compute the largest subset B of A such that the subtrees of T and T' induced by B are isomorphic. The UMAST problem is closely related to some problems in biology, in particular that of finding the consensus between evolutionary trees (or phylogenies) of a set of species. CR Classification: E.1, F.2.2 Key words: computational biology, evolutionary tree, unrooted maximum agreement subtree, tree isomorphism T. Husfeldt, T. Rauhe and S.
Skyum Lower Bounds for Dynamic Transitive Closure, Planar Point Location, and Parentheses Matching 323 Abstract We give a number of new lower bounds in the cell probe model with logarithmic cell size, which entails the same bounds on the random access computer with logarithmic word size and unit cost operations. We study the signed prefix sum problem: given a string of length n of 0s and signed 1s, compute the sum of its i-th prefix during updates. We show a lower bound of Omega(log n/log log n) time per operation, even if the prefix sums are bounded by log n/log log n during all updates. We also show that if the update time is bounded by the product of the worst-case update time and the answer to the query, then the update time must be Omega(sqrt(log n/log log n)). These results allow us to prove lower bounds for a variety of seemingly unrelated dynamic problems. We give a lower bound for dynamic planar point location in monotone subdivisions of Omega(log n/log log n) per operation. We give a lower bound for dynamic transitive closure on upward planar graphs with one source and one sink of Omega(log n/(log log n)^2) per operation. We give a lower bound of Omega(sqrt(log n/log log n)) for the dynamic membership problem of any Dyck language with two or more letters. This implies the same lower bound for the dynamic word problem for the free group with k generators. We also give lower bounds for certain range searching variants and for the dynamic prefix majority and prefix equality problems. CR Classification: F.1.1, F.2.2 Key words: cell probe model, lower bounds, dynamic graph algorithms, Dyck languages G. S. Brodal, S. Chaudhuri and J. Radhakrishnan The Randomized Complexity of Maintaining the Minimum 337 Abstract The complexity of maintaining a set under the operations Insert, Delete and FindMin is considered.
In the comparison model it is shown that any randomized algorithm with expected amortized cost t comparisons per Insert and Delete has expected cost at least n/(e 2^{2t}) - 1 comparisons for FindMin. If FindMin is replaced by a weaker operation, FindAny, then it is shown that a randomized algorithm with constant expected cost per operation exists; in contrast, it is shown that no deterministic algorithm can have constant cost per operation. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given. CR Classification: F.2.2 D. Fernández-Baca, G. Slutzki and D. Eppstein Using Sparsification for Parametric Minimum Spanning Tree Problems 352 Abstract Two applications of sparsification to parametric computing are given. The first is a fast algorithm for enumerating all distinct minimum spanning trees in a graph whose edge weights vary linearly with a parameter. The second is an asymptotically optimal algorithm for the minimum ratio spanning tree problem, as well as other search problems, on dense graphs. CR Classification: F.2.2, G.2.2 Key words: algorithms, graphs, minimum ratio optimization, minimum spanning trees, parametric problems, sparsification M. V. Marathe, R. Ravi and R. Sundaram Service-Constrained Network Design Problems 367 Abstract Several practical instances of network design problems often require the network to satisfy multiple constraints. In this paper, we focus on the following problem (and its variants): find a low-cost network, under one cost function, that services every node in the graph under another cost function (i.e., every node of the graph is within a prespecified distance from the network). Such problems find applications in optical network design and the efficient maintenance of distributed databases. We utilize the framework developed in Marathe et al. [1995] (Marathe, M. V., Ravi, R., Sundaram, R., Ravi, S. S., Rosenkrantz, D. J., and Hunt III, H. B. 1995.
Bicriteria network design problems. In Proceedings of the International Colloquium on Automata, Languages and Programming, LNCS 944, 487-498.) to formulate these problems as bicriteria network design problems, and present approximation algorithms for a class of service-constrained network design problems. We also present lower bounds on the approximability of these problems that demonstrate that our performance ratios are close to the best possible. CR Classification: G.2.2 Key words: approximation algorithms, bicriteria problems, spanning trees, network design, combinatorial algorithms T. Asano, T. Ono and T. Hirata Approximation Algorithms for the Maximum Satisfiability Problem 388 Abstract The maximum satisfiability problem (MAX SAT) is the following: given a set of clauses with weights, find a truth assignment that maximizes the sum of the weights of the satisfied clauses. In this paper, we present approximation algorithms for MAX SAT, including a 0.76544-approximation algorithm. The previous best approximation algorithm for MAX SAT was proposed by Goemans-Williamson and has a performance guarantee of 0.7584. Our algorithms are based on semidefinite programming and the 0.75-approximation algorithms of Yannakakis and Goemans-Williamson. CR Classification: G.1.6, G.2.1, G.3, I.1.2 Key words: approximation algorithms, maximum satisfiability, network flows, randomized algorithms, semidefinite programming H. C. Lau and O. Watanabe Randomized Approximation of the Constraint Satisfaction Problem 405 Abstract We consider the Weighted Constraint Satisfaction Problem (W-CSP), which is a fundamental problem in Artificial Intelligence and a generalization of important combinatorial problems such as MAX CUT and MAX SAT. In this paper, we prove non-approximability properties of W-CSP and give improved approximations of W-CSP via randomized rounding of linear programming and semidefinite programming relaxations.
Our algorithms are simple to implement and experiments show that they are run-time efficient. CR Classification: F.2.2, I.2.8, F.1.3 Key words: approximation algorithms, constraint satisfaction problem, randomized rounding N. Alon, P. Kelsen, S. Mahajan and H. Ramesh Coloring 2-colorable Hypergraphs with a Sublinear Number of Colors 425 Abstract A coloring of a hypergraph is a mapping of vertices to colors such that no hyperedge is monochromatic. We are interested in the problem of coloring 2-colorable hypergraphs. For the special case of graphs (hypergraphs of dimension 2) this can easily be done in linear time. The problem for general hypergraphs is much more difficult, since a result of Lovász implies that the problem is NP-hard even if all hyperedges have size three. In this paper we develop approximation algorithms for this problem. Our first result is an algorithm that colors any 2-colorable hypergraph on n vertices and dimension d with O(n^{1-1/d} log^{1-1/d} n) colors. This is the first algorithm that achieves a sublinear number of colors in polynomial time. This algorithm is based on a new technique for reducing degrees in a hypergraph that should be of independent interest. For the special case of hypergraphs of dimension three we improve on the previous result by obtaining an algorithm that uses only O(n^{2/9} log^{17/8} n) colors. This result makes essential use of semidefinite programming. We further show that the semidefinite programming approach fails for larger dimensions. CR Classification: F.2 Key words: approximation algorithms, hypergraphs, semidefinite programming Author Index Volume 3 (1996) 440 VOLUME 2 NUMBER 1 SPRING 1995 E. Ukkonen Editor's Foreword 1 M. Golin, R. Raman, C. Schwartz, and M. Smid Simple Randomized Algorithms for Closest Pair Problems 3 Abstract We present a conceptually simple, randomized incremental algorithm for finding the closest pair in a set of n points in D-dimensional space, where D >= 2 is a fixed constant.
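A randomized incremental closest pair computation of this flavor can be sketched with a grid of cells stored in a dictionary (our 2-D simplification under a random insertion order; the function name and rebuild strategy are illustrative, not the paper's exact algorithm, and distinct points are assumed so the cell size never becomes zero):

```python
import math
import random

def closest_pair_dist(points, seed=0):
    """Closest pair distance via a randomized incremental grid dictionary (2-D).
    Assumes at least two points, all distinct."""
    pts = points[:]
    random.Random(seed).shuffle(pts)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    d = dist(pts[0], pts[1])  # current closest distance = grid cell side

    def build(prefix, d):
        # bucket each point into a square grid cell of side d
        g = {}
        for p in prefix:
            g.setdefault((int(p[0] // d), int(p[1] // d)), []).append(p)
        return g

    grid = build(pts[:2], d)
    for i in range(2, len(pts)):
        p = pts[i]
        cx, cy = int(p[0] // d), int(p[1] // d)
        # any point within distance d of p lies in one of the 9 adjacent cells
        best = min([d] + [dist(p, q)
                          for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                          for q in grid.get((cx + dx, cy + dy), [])])
        if best < d:
            d = best                       # closer pair found: shrink cells
            grid = build(pts[:i + 1], d)   # and rebuild the whole grid
        else:
            grid.setdefault((cx, cy), []).append(p)
    return d
```

Under random insertion order the expected number of full rebuilds is small, which is the source of the expected linear running time with hashing.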
Much of the work reduces to maintaining a dynamic dictionary. Using dynamic perfect hashing to implement the dictionary, the closest pair algorithm runs in O(n) expected time. Alternatively, we can use balanced search trees, and stay within the algebraic computation tree model, which yields an expected running time of O(n log n). In addition to being quick on the average, the algorithm is reliable: we show that the high-probability bound increases only slightly to O(n log n / log log n) if we use hashing and even remains O(n log n) if we use trees as our dictionary implementation. Finally, we give some extensions to the fully dynamic closest pair problem, and to the k closest pair problem. CR Classification: F.2.2 Key words: randomized algorithms, closest pair problem V. Kamakoti, K. Krithivasan and C. Pandu Rangan An Efficient Randomized Algorithm for the Closest Pair Problem on Colored Point Sets 28 Abstract We present a simple randomized algorithm for finding the Closest Foreign Pair in a set of n points in D-dimensional space, where D >= 2 is a fixed constant and the distances are measured in the L_1 and the L_infty metric. The algorithm runs in O(n log^{D-1} n / log log n) time with high probability for the L_infty metric and in O(n log^{2^{D-1}-1} n / log log n) time with high probability for the L_1 metric. CR Classification: I.1.2, I.3.5 Key words: closest foreign pair, computational geometry, randomized algorithms J. Kratochvíl, P. D. Manuel and M. Miller Generalized Domination in Chordal Graphs 41 Abstract We discuss the computational complexity of generalized domination problems, which were introduced in [J. A. Telle: Complexity of domination-type problems in graphs, Nordic Journal of Computing 1 (1994), 157-171], restricted to chordal and interval graphs.
The existence problem, parametrized by two sets of nonnegative integers sigma and rho, asks for the existence of a set S of vertices of a given graph such that for every vertex u in S (or u not in S), the number of neighbors of u which are in S is in sigma (in rho, respectively). Telle proved that this problem is NP-complete for general graphs, provided both sigma and rho are finite and 0 not in rho. One of our main results shows that in such cases, the existence problem is polynomially solvable for interval graphs. On the other hand, for chordal graphs, the complexity of the existence problem varies significantly even when sigma and rho contain one element each. CR Classification: F.2.2, G.2.2 Key words: chordal graph, interval graph, domination, computational complexity V. Leppänen and M. Penttonen Work-Optimal Simulation of PRAM Models on Meshes 51 Abstract Memory access is the bottleneck in the simulation of shared memory on a distributed memory model. We show that EREW, CREW, and CRCW PRAM models can be simulated work-optimally on a mesh-connected routing machinery that is coated with processors. The simulation uses a combination of well-known techniques such as randomized hashing, greedy routing, and parallel slackness. The advantage of our mesh-based simulation is the simple and scalable structure of the mesh combined with good performance. A theoretical analysis and practical experiments show that our simulation is economical and efficient in comparison with other proposed architectures, such as butterflies and hypercubes, for all feasible numbers of processors. CR Classification: C.1.2, C.2.1, F.1.2, F.2.2, G.3 Key words: PRAM, mesh, shared memory, simulation, work-optimal M. Vassilakopoulos, Y. Manolopoulos and B. Kröll Efficiency Analysis of Overlapped Quadtrees 70 Abstract The technique of overlapping region quadtrees results in significant space reduction.
This new form of data structure is suitable for representing sequences of similar binary raster images emerging in many areas. Apart from a short introduction to the method of overlapping, an analysis of the space reduction achieved is presented in this article. First, a general theorem about the average number of common nodes of two images obeying a general model is given. Next, two models of image similarity are introduced and the respective formulae are derived from the general one. Evaluating these formulae for various parameters shows the efficiency of overlapping. CR Classification: E.1, E.2, F.2.2, G.2.1, G.2.2, H.2.2 Key words: algorithms, performance, quadtrees, random images, overlapping data structures VOLUME 2 NUMBER 2 SUMMER 1995 SELECTED PAPERS OF THE 5TH INTERNATIONAL CONFERENCE ON CONCUR '94: CONCURRENCY THEORY, AUGUST 22-25, 1994: B. Jonsson and J. Parrow Guest Editors' Foreword 87 O. Burkart and B. Steffen Composition, Decomposition and Model Checking of Pushdown Processes 89 Abstract In this paper, we consider a strict generalization of context-free processes, the pushdown processes, which are particularly interesting for three reasons: First, they are closed under parallel composition with finite state systems. This is shown by proving a new expansion theorem, whose implied `representation explosion' is no worse than for finite state systems. Second, they are the smallest extension of context-free processes allowing parallel composition with finite state processes, which we prove by showing that every pushdown process is bisimilar to a (relabelled) parallel composition of a context-free process (namely a stack) with some finite process. Third, they can be checked by means of an elegant adaptation to pushdown automata of the second order model checker known for context-free processes.
As arbitrary parallel composition between context-free processes provides Turing power, and therefore destroys every hope for automatic verification, pushdown processes can be considered as the appropriate generalization of context-free processes for frameworks for automatic verification. CR Classification: F.1.1, F.1.2, F.3.1, F.4.2 Key words: infinite-state systems, context-free processes, pushdown processes, parallel composition, model-checking P. Di Gianantonio, F. Honsell and G. Plotkin Uncountable Limits and the Lambda Calculus 126 Abstract In this paper we address the problem of solving recursive domain equations using uncountable limits of domains. These arise, for instance, when dealing with the omega_1-continuous function-space constructor and are used in the denotational semantics of programming languages which feature unbounded choice constructs. Surprisingly, the category of cpo's and omega_1-continuous embeddings is not omega_0-cocomplete. Hence the standard technique for solving reflexive domain equations fails. We give two alternative methods. We also discuss the issue of completeness of the lambda beta eta-calculus w.r.t. reflexive domain models. We show that among the reflexive domain models in the category of cpo's and omega_0-continuous functions there is one which has a minimal theory. We give a reflexive domain model in the category of cpo's and omega_1-continuous functions whose theory is precisely the lambda beta eta theory. So omega_1-continuous lambda-models are complete for the lambda beta eta-calculus. CR Classification: F.3.2, F.4.1, D.3.3 Key words: countable non-determinism, denotational semantics, domain equations, lambda-calculus P. C. Kanellakis, D. Michailidis and A. A.
Shvartsman Controlling Memory Access Concurrency in Efficient Fault-Tolerant Parallel Algorithms 146 Abstract The CRCW PRAM under dynamic fail-stop (no restart) processor behavior is a fault-prone multiprocessor model for which it is possible to both guarantee reliability and preserve efficiency. To handle dynamic faults some redundancy is necessary in the form of many processors concurrently performing a common read or write task. In this paper we show how to significantly decrease this concurrency by bounding it in terms of the number of actual processor faults. We describe a low concurrency, efficient and fault-tolerant algorithm for the Write-All primitive: ``using <= N processors, write 1's into N locations''. This primitive can serve as the basis for efficient fault-tolerant simulations of algorithms written for fault-free PRAMs on fault-prone PRAMs. For any dynamic failure pattern F, our algorithm has total write concurrency <= |F| and total read concurrency <= 7 |F| log N, where |F| is the number of processor faults (for example, there is no concurrency in a run without failures); note that previous algorithms used Omega(N log N) concurrency even in the absence of faults. We also describe a technique for limiting the per step concurrency and present an optimal fault-tolerant EREW PRAM algorithm for Write-All, when all processor faults are initial. CR Classification: C.1.2, F.1.2, F.2.3 Key words: parallel computation, fault-tolerance, concurrency, robust algorithms N. P. Mendler, P. Panangaden, P. J. Scott and R. A. G. Seely A Logical View of Concurrent Constraint Programming 181 Abstract Concurrent Constraint Programming (CCP) has been the subject of growing interest as the focus of a new paradigm for concurrent computation. Like logic programming it claims close relations to logic. In fact CCP languages are logics in a certain sense that we make precise in this paper.
In recent work it was shown that the denotational semantics of determinate concurrent constraint programming languages forms a fibred categorical structure called a hyperdoctrine, which is used as the basis of the categorical formulation of first-order logic. What this shows is that the combinators of determinate CCP can be viewed as logical connectives. In this paper we extend these ideas to the operational semantics of such languages and thus make available similar analogies for a much broader variety of languages including indeterminate CCP languages and concurrent block-structured imperative languages. CR Classification: F3.1, F3.2, D1.3, D3.3 Key words: concurrent constraint programming, simulation, logic, categories, hyperdoctrines M. Nielsen and C. Clausen Games and Logics for a Noninterleaving Bisimulation 221 Abstract A categorical definition of bisimulation, applicable to a wide range of models in concurrency with an accompanying notion of observations, was recently suggested by Joyal, Nielsen and Winskel. The definition is in terms of spans of open maps, and it coincides with Park and Milner's strong bisimulation for the standard model of labelled transition systems with sequential observations. Here, we briefly present the general set-up, and discuss its applications. For the model of transition systems with independence and nonsequential observations, the associated notion of bisimulation was shown to be a slight strengthening of the history preserving bisimulations of Rabinovich and Trakhtenbrot. Furthermore, it turns out that this bisimulation has game theoretic and logical characterizations in the form of pleasantly simple modifications of well-known characterizations of standard strong bisimulation. CR Classification: F.1.2, F.4.1 Key words: concurrency, bisimulation, games, logics R. Segala and N.
Lynch Probabilistic Simulations for Probabilistic Processes 250 Abstract Several probabilistic simulation relations for probabilistic systems are defined and evaluated according to two criteria: compositionality and preservation of ``interesting'' properties. Here, the interesting properties of a system are identified with those that are expressible in an untimed version of the Timed Probabilistic concurrent Computation Tree Logic (TPCTL) of Hansson. The definitions are made, and the evaluations carried out, in terms of a general labeled transition system model for concurrent probabilistic computation. The results cover weak simulations, which abstract from internal computation, as well as strong simulations, which do not. CR Classification: D.2.1, D.2.4, D.3.1, F.1.2 Key words: probabilistic automata, simulation method, specification, temporal logic, verification C. Verhoef A Congruence Theorem for Structured Operational Semantics with Predicates and Negative Premises 274 Abstract We propose a syntactic format, the panth format, for structured operational semantics in which besides ordinary transitions also predicates, negated predicates, and negative transitions may occur, such that if the rules are stratifiable, strong bisimulation equivalence is a congruence for all the operators that can be defined within the panth format. To show that this format is useful, we take examples from the literature that satisfy the panth format but none of the previously proposed formats. The examples touch upon issues such as priorities, termination, convergence, discrete time, recursion, (infinitary) Hennessy-Milner logic, and universal quantification. CR Classification: D.3.1, F.1.1, F.3.2, F.4.3 Key words: structured operational semantics, term deduction system, transition system specification, strong bisimulation, congruence theorem, predicate, negative premise, negated predicate, stratifiable, stratification. VOLUME 2 NUMBER 3 FALL 1995 C. B. Fraser and R. W.
Irving Approximation Algorithms for the Shortest Common Supersequence 303 Abstract It is well known that the problem of finding a shortest common supersequence (SCS) of k strings is NP-hard. In this paper we analyse four natural polynomial-time approximation algorithms for the SCS from the point of view of worst-case performance guarantee, expressed in terms of k. All four algorithms behave badly in the worst case, whether the underlying alphabet is unbounded or of fixed size. For a Tournament style algorithm proposed by Timkovsky, we show that the length of the SCS found is between (k+2)/4 and (3k+2)/8 times the length of the optimal in the worst case. The corresponding lower bound proved for two obvious Greedy algorithms, Greedy1 and Greedy2, is (4k+5)/27 and the corresponding upper bounds are (k+e-1)/e (e = 2.718...) and (k+3)/4 respectively. In the case of the so-called Majority-Merge algorithm of Jiang and Li, no worst-case guarantee beyond the trivial factor of k is possible. Even for a binary alphabet, no constant performance guarantee is possible for any of the four algorithms, in contrast with the guarantee of 2 provided by a trivial algorithm in that case. CR Classification: F.2.2, G.2.1 Key words: shortest common supersequence, string algorithms, approximation algorithms K.-L. Chung and Y.-W. Chen Mapping Pyramids into 3-D Meshes 326 Abstract Embedding one parallel architecture into another is very important in the area of parallel processing because parallel architectures can vary widely. Given a pyramid architecture of (4^N-1)/3 nodes and height N, this paper presents a mapping method to embed the pyramid architecture into a ((4^{k+1}+2)/3) x 2^{N-1-k} x 2^{N-1-k} mesh for 0 <= k <= N-1. Our method has dilation max{4^k, 2^{N-2-k}} and expansion 1 + 2/4^{k+1}. When k=(N-2)/3, the pyramid can be embedded into a ((4^{(N+1)/3}+2)/3) x 2^{(2N-1)/3} x 2^{(2N-1)/3} mesh with dilation 2^{(2N-4)/3} and expansion 1+2/(4^{(N+1)/3}).
This result has an optimal expansion when N is sufficiently large and is superior to the previous mapping methods [Ip 1993] in terms of the dilation and expansion. CR Classification: C.1.2, C.2.1, F.1.2, G.2.2 Key words: dilation, embedding, expansion, parallel architecture, pyramid, 3-dimensional mesh J. C. Godskesen and K. G. Larsen Synthesizing Distinguishing Formulae for Real Time Systems 338 Abstract This paper describes a technique for generating diagnostic information for the timed bisimulation equivalence and the timed simulation preorder. More precisely, given two (parallel) networks of regular real-time processes, the technique will provide a logical formula that differentiates them in case they are not timed (bi)similar. Our method may be seen as an extension of the algorithm by Cerans for deciding timed bisimilarity, in that information about time quantities sufficient for generating distinguishing formulae has been added. CR Classification: D.2.2, D.2.4, F.1.2, F.3.1, F.4.3 Key words: tools and techniques, verification, process algebra, real time M. Walicki and M. Broy Structured Specifications and Implementation of Nondeterministic Data Types 358 Abstract The use of nondeterminism in specifications, as distinct from underspecification, is motivated by an example in the context of data refinement. A simple formalism for specifying nondeterministic data types is introduced. Its semantics is given in terms of the existing formalisms of relations, multialgebras, sets of functions and oracles by means of appropriate translation rules. Nondeterministic data refinement is studied from the syntactic and semantic perspective, and the correctness of the suggested proof obligations is proved. A more general implementation relation and parameterisation of nondeterministic data types are discussed and the standard theorems of vertical and horizontal composition are generalized to the nondeterministic case.
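Stepping back to the shortest common supersequence entry above: the Majority-Merge algorithm of Jiang and Li is simple enough to sketch. In each step it appends the symbol occurring most often among the strings' first characters and strips it from the strings it heads (our implementation; ties are broken arbitrarily):

```python
def majority_merge(strings):
    """Majority-Merge heuristic for the shortest common supersequence."""
    rest = [s for s in strings if s]
    out = []
    while rest:
        # count how many strings each candidate symbol currently heads
        counts = {}
        for s in rest:
            counts[s[0]] = counts.get(s[0], 0) + 1
        c = max(counts, key=counts.get)
        out.append(c)
        # strip c from every string it heads; drop exhausted strings
        rest = [s[1:] if s[0] == c else s for s in rest]
        rest = [s for s in rest if s]
    return "".join(out)
```

The output is always a common supersequence; the abstract's point is that its length can be a factor of k worse than optimal.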
CR Classification: D.3.1, F.1.2, F.3.1, F.3.2, I.1.3 Key words: algebraic specification, nondeterminism, data refinement, verification VOLUME 2 NUMBER 4 WINTER 1995 K.-L. Chung Fast Median-Finding on Mesh-Connected Computers with Segmented Buses 397 Abstract Consider a two-dimensional mesh-connected computer with segmented buses (2-MCCSB). A k_1n_1 x k_1n_2 2-MCCSB is constructed from a k_1n_1 x k_1n_2 mesh organization by enhancing the power of each disjoint n_1 x n_2 submesh with multiple buses (sub-2-MCCMB). Given a set of n elements, this paper presents a parallel algorithm for finding the median in O(n^{1/8} log n) time on an n^{1/2} x n^{1/2} square 2-MCCSB, where each disjoint sub-2-MCCMB is of dimension n^{3/8} x n^{3/8}. This result is competitive with the previous result with time bound of O(n^{1/6} (log n)^{2/3}) for finding the median on an n^{1/2} x n^{1/2} square 2-MCCMB and our time bound is equal to the previous time bound of O(n^{1/8} log n) on an n^{5/8} x n^{3/8} rectangular 2-MCCMB. Furthermore, the time bound of our parallel algorithm can be reduced to O(n^{1/10} log n) time on an n^{3/5} x n^{2/5} rectangular 2-MCCSB, where each disjoint sub-2-MCCMB is of dimension n^{1/2} x n^{3/10}. We also show that the time bound can be further reduced to O(n^{1/11} log n) if O(n^{10/11}) processors are used. Our algorithm can be modified to solve the selection problem. CR Classification: C.1.2, F.1.2, F.2.2 Key words: broadcasting, median finding, meshes, meshes with multiple buses, parallel algorithms, selection J. Parrow Interaction Diagrams 407 Abstract Interaction diagrams are graphic representations of concurrent processes with evolving access capabilities; in particular they illustrate the points of access and relations between them. The basic step of computation is the migration of an access point between processes. This paper explains interaction diagrams through a sequence of examples.
Diagrams can be regarded as graphic counterparts of terms in the pi-calculus and illuminate some interesting points on its construction. CR Classification: F.1.1, F.1.2, F.3.3 Key words: concurrency, graphic representation, pi-calculus, access migration C. Rick A New Flexible Algorithm for the Longest Common Subsequence Problem 444 Abstract Given two sequences A = a_1a_2...a_m and B = b_1b_2...b_n, m <= n, over some alphabet Sigma of size s, the longest common subsequence problem is to find a sequence of greatest possible length, p, that can be obtained from both A and B by deleting zero or more (not necessarily adjacent) symbols. A new algorithm that is efficient for both short and long longest common subsequences is presented. It also improves on previous methods for longest common subsequences of intermediate length. Thus, it is more flexible and can be used for a wider range of applications than others. The algorithm is based on the well-known paradigm of computing dominant matches and was obtained by observing additional structural properties leading to a kind of dualization. The worst-case running time of the algorithm is O(ns + min{pm, p(n-p)}). Some experimental results which prove the practicability of the new method are given, too. CR Classification: F.2.2, G.2.1 Key words: algorithms, longest common subsequence, string processing, combinatorial pattern matching, computational biology H. L. Bodlaender, T. F. Gonzalez and T. Kloks Complexity Aspects of Two-Dimensional Data Compression 462 Abstract Let M be a 2-dimensional colored map which has been digitized into a large 2-dimensional array (M). We define a class of languages (called ``rectilinear'') to describe our digitized maps and classify them based on their level of succinct representation. We also study the map compression problem, i.e., the problem of finding for any given map a shortest description within a given language.
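The dominant-match algorithms in the Rick entry above improve on the classical O(mn) dynamic program for the longest common subsequence, which for reference reads:

```python
def lcs_length(a, b):
    """Classical O(mn) dynamic program for the longest common subsequence length."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common symbol
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

Dominant-match methods avoid filling the whole table and instead track only the matches that can change the answer, which is what makes bounds like O(ns + min{pm, p(n-p)}) possible.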
For one dimensional maps we show that a shortest description can be generated quickly for some languages, but for other languages the problem is NP-hard. We also show that a large number of linear time algorithms for our languages generate map descriptions whose length is at most twice the length of the minimum length description. For all our languages we show that the two dimensional map compression problem is NP-hard. Furthermore, for one of the most succinct of our languages we present evidence suggesting that finding a near-optimal map compression is as difficult as finding an optimal compression. CR Classification: E.4, I.4.2, F.2.2 Key words: data compression, maps, images, computational complexity NOTE: J. Clausen and J. Krarup A Family of Bipartite Cardinality Matching Problems Solvable in O(n^2) Time 496 Abstract For a given, unweighted bipartite graph G with 2n non-isolated vertices, we consider the so-called Bipartite Cardinality Matching Problem (BCMP) for which the time complexity of the fastest exact algorithm available is O(n^{5/2}). We devise a greedy algorithm which either finds a perfect matching in O(n^{2}) time or identifies a cycle of length 4 in the complement of G. CR Classification: G.1.6, G.2.2 Key words: bipartite cardinality matching, greedy algorithm Author Index Volume 2 (1995) 503 VOLUME 1 NUMBER 1 SPRING 1994 INVITED PAPERS: K. Fisher, F. Honsell and J.C. Mitchell A Lambda Calculus of Objects and Method Specialization 3 Abstract This paper presents an untyped lambda calculus, extended with object primitives that reflect the capabilities of so-called delegation-based object-oriented languages. A type inference system allows static detection of errors, such as message not understood, while at the same time allowing the type of an inherited method to be specialized to the type of the inheriting object.
Type soundness is proved using operational semantics, and examples illustrating the expressiveness of the pure calculus are presented. CR Classification: F.3.1, D.3.3, F.4.1 T. Hagerup and M. Maas Generalized Topological Sorting in Linear Time 38 Abstract The generalized topological sorting problem takes as input a positive integer k and a directed, acyclic graph with some vertices labeled by positive integers, and the goal is to label the remaining vertices by positive integers in such a way that each edge leads from a lower-labeled vertex to a higher-labeled vertex, and such that the set of labels used is exactly {1,...,k}. Given a generalized topological sorting problem, we want to compute a solution, if one exists, and also to test the uniqueness of a given solution. The best previous algorithm for the generalized topological sorting problem computes a solution, if one exists, and tests its uniqueness in O(n log log n + m) time on input graphs with n vertices and m edges. We describe improved algorithms that solve both problems in linear time O(n+m). CR Classification: F.2.2, G.2.2 T.F. Melham A Mechanized Theory of the Pi-calculus in HOL 50 Abstract The pi-calculus is a process algebra for modelling concurrent systems in which the pattern of communication between processes may change over time. This paper describes the results of preliminary work on a definitional formal theory of the pi-calculus in higher order logic using the HOL theorem prover. The ultimate goal of this work is to provide practical mechanized support for reasoning with the pi-calculus about applications. CR Classification: F.3.1, F.3.2, D.2.1, D.2.4 H. Mössenböck Extensibility in the Oberon System 77 Abstract We show how an object-oriented system - and in particular the Oberon System - can be used to write software that is extensible by end users even while the software is running.
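The classical special case of the generalized topological sorting problem in the Hagerup and Maas entry above (k = n, no pre-labeled vertices) is ordinary topological sorting, solvable in linear time O(n+m); a sketch using Kahn's algorithm:

```python
from collections import deque

def topological_order(n, edges):
    """Kahn's algorithm: linear-time topological sort of a DAG on vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:  # all predecessors of v are placed
                q.append(v)
    if len(order) != n:
        raise ValueError("graph has a cycle")
    return order
```

The generalized problem adds the fixed labels and the requirement that exactly the labels {1,...,k} be used, which is what the paper's linear-time algorithms handle.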
Extensibility instead of completeness may be a way out of the unpleasant situation in the software industry where applications still tend to become bigger every year. Oberon is both an object-oriented programming language and an operating system with new concepts such as commands and dynamic loading. The language and the system make up an environment that is similar to Smalltalk in its flexibility but offers static type-checking and is much more efficient. CR Classification: D.2.2, D.1.5 SURVEY: P. Orponen Neural Networks and Complexity Theory 94 Abstract We survey some of the central results in the complexity theory of discrete neural networks, with pointers to the literature. Our main emphasis is on the computational power of various acyclic and cyclic network models, but we also discuss briefly the complexity aspects of synthesizing networks from examples of their behavior. CR Classification: F.1.1, F.1.3 Key words: neural networks, computational complexity, threshold circuits, associative memory REGULAR PAPERS: H.L. Bodlaender, G. Tel and N. Santoro Trade-Offs in Non-Reversing Diameter 111 Abstract Consider a tree T with a number of extra edges (the bridges) added. We consider the notion of diameter that is obtained by admitting only paths p with the property that for every bridge b in path p, no edge that is on the unique path (in T) between the endpoints of b is also in p or on the unique path between the two endpoints of any other bridge in p. (Such a path is called non-reversing.) We investigate the trade-off between the number of added bridges and the resulting diameter. Upper and lower bounds of the same order are obtained for all diameters of constant size. For the special case where T is a chain we also obtain matching upper and lower bounds for diameters of size alpha(N)+O(1) or of size f(N) where f(N) grows faster than alpha(N). Some applications are given. CR Classification: F.2.2, G.2.2, F.2.3 O.
Lysne Heuristics for Completion in Automatic Proofs by Structural Induction 135 Abstract A method for proof by structural induction is studied, and the problems of automating the method are investigated. We especially consider the equational part of such proofs, and we observe that the ability to cope with possibly infinite searches for non-existent equational proofs is crucial. Completion as a means to find an equational proof of equivalence of two given terms is studied. By heuristics we weaken the requirements of completeness on the resulting set, and thereby present modifications of both standard completion and ordered completion which guarantee termination. CR Classification: F.3.1 Key words: automated verification, initial algebra, structural induction J.A. Telle Complexity of Domination-Type Problems in Graphs 157 Abstract Many graph parameters are the optimal value of an objective function over selected subsets S of vertices with some constraint on how many selected neighbors vertices in S, and vertices not in S, may have. Classic examples are minimum dominating set and maximum independent set. We give a characterization of these graph parameters that unifies their definitions, facilitates their common algorithmic treatment and allows for their uniform complexity classification. This characterization provides the basis for a taxonomy of domination-type and independence-type problems. We investigate the computational complexity of problems within this taxonomy, identify classes of NP-complete problems and classes of problems solvable in polynomial time. CR Classification: F.2.2, G.2.2 VOLUME 1 NUMBER 2 SUMMER 1994 REGULAR PAPERS: J.R.S. Blair and B.W. Peyton On Finding Minimum-Diameter Clique Trees 173 Abstract A clique-tree representation of a chordal graph often reduces the size of the data structure needed to store the graph, permitting the use of extremely efficient algorithms that take advantage of the compactness of the representation.
Since some chordal graphs have many distinct clique-tree representations, it is interesting to consider which one is most desirable under various circumstances. A clique tree of minimum diameter (or height) is sometimes a natural candidate when choosing clique trees to be processed in a parallel-computing environment. This paper introduces a linear-time algorithm for computing a minimum-diameter clique tree. CR Classification: F.2.2, G.2.2 Key words: chordal graphs, clique trees, acyclic hypergraphs, parallel computing

K. Coolsaet, V. Fack and H. De Meyer A Tabular Method for Verification of Data Exchange Algorithms on Networks of Parallel Processors 202 Abstract We give a tabular method for the verification of data exchange algorithms on networks that possess a certain symmetry (namely, whose underlying graph is a Cayley graph). The algorithms use no intermediate buffering of messages. To illustrate the method, optimal total exchange (i.e., all-to-all personalised) algorithms are given for several much-used processor configurations, such as ring networks, the hypercube, and symmetric meshes with wrap-around (two- and three-dimensional). To the best of our knowledge, the latter are new. CR Classification: F.2.2, G.2.2 Key words: total exchange, optimal algorithm, hypercube, symmetric meshes with wrap-around

S. Jajodia, R. Mukkamala and K.V.S. Ramarao A View-based Dynamic Replication Control Algorithm 214 Abstract Many algorithms exist in the literature for managing replicated database objects. Some of these are dynamic and attempt to adapt to changing network configurations caused by failures, particularly network partitioning. This paper presents a new dynamic algorithm for replication control with several desirable features: not only does it enhance the availability of read and write operations under failure conditions, but it also achieves this at a relatively low cost.
This algorithm is based on the concept of views originally introduced by El Abbadi, Skeen and Cristian, and improves on the dynamic voting ideas developed by many authors. CR Classification: D.4.3, H.2.4

R. Janardan and F.P. Preparata Widest-Corridor Problems 231 Abstract A k-dense corridor through a finite set S of n points in the plane is an open region of the plane that is bounded by two parallel lines intersecting the convex hull of S, and that contains k points of S. The problem of finding a widest k-dense corridor arises in robot motion-planning. In this paper, efficient solutions are presented for several versions of this problem. Results include: two algorithms for finding widest k-dense corridors for any k; an algorithm to dynamically maintain a widest empty corridor under online insertions into and deletions from S; an algorithm to find a widest (n-1)-dense closed corridor; and a widest empty corridor algorithm for polygonal obstacles. The techniques used are based on geometric duality and on efficient searching in the convex layers of a point set. CR Classification: E.1, F.2.2, I.2.9 Key words: arrangement, computational geometry, convex layers, data structures, geometric duality

U. Pferschy, R. Rudolf and G.J. Woeginger Some Geometric Clustering Problems 246 Abstract This paper investigates the computational complexity of several clustering problems with special objective functions for point sets in the Euclidean plane. Our strongest negative result is that clustering a set of 3k points in the plane into k triangles with minimum total circumference is NP-hard. On the other hand, we identify several special cases that are solvable in polynomial time owing to the special structure of their optimal solutions: the clustering of points on a convex hull into triangles; the clustering into equal-sized subsets of points on a line or on a circle with special objective functions; and the clustering with minimal cluster distances.
Furthermore, we investigate the clustering of planar point sets into convex quadrilaterals. CR Classification: F.2.2

NOTES: K. Jansen Integral Flow with Disjoint Bundles 264 Abstract Sahni showed that the integral flow problem with overlapping bundles is NP-complete. We show that this problem remains NP-complete even if the bundles are disjoint.

J.W. Krussel and B.F. Schaudt A Note on Higher Order Voronoi Diagrams 268 Abstract In this note we prove some facts about the number of segments of a bisector of two sites that are used in a higher order Voronoi diagram. CR Classification: F.2.1 Key words: computational geometry, Voronoi diagrams

VOLUME 1 NUMBER 3 FALL 1994 SELECTED PAPERS OF THE 20TH INTERNATIONAL COLLOQUIUM ON AUTOMATA, LANGUAGES AND PROGRAMMING (ICALP93), JULY 5-9, 1993: A. Lingas Editor's Foreword 273

M.V. Marathe, H.B. Hunt III, S.S. Ravi The Complexity of Approximating PSPACE-Complete Problems for Hierarchical Specifications 275 Abstract We extend the concept of polynomial-time approximation algorithms to apply to problems for hierarchically specified graphs, many of which are PSPACE-complete. We present polynomial-time approximation algorithms for several standard PSPACE-hard problems considered in the literature. In contrast, we prove that finding epsilon-approximations for any epsilon > 0 for several other problems, when the instances are specified hierarchically, is PSPACE-hard. We present polynomial-time approximation algorithms for the following problems when the graphs are specified hierarchically: minimum vertex cover, maximum 3SAT, weighted max cut, minimum maximal matching, and bounded-degree maximum independent set.
In contrast, we show that for any epsilon > 0, obtaining epsilon-approximations for the following problems when the instances are specified hierarchically is PSPACE-hard: the number of true gates in a monotone acyclic circuit when all input values are specified, and the optimal value of the objective function of a linear program. It is also shown that obtaining a performance guarantee of less than 2 is PSPACE-hard for the following problems when the instances are specified hierarchically: high-degree subgraph, k-vertex-connected subgraph and k-edge-connected subgraph. Key words: hierarchical specifications, approximation algorithms, computational complexity, algorithms and data structures

V. Kann Polynomially Bounded Minimization Problems that are Hard to Approximate 317 Abstract MIN PB is the class of minimization problems whose objective functions are bounded by a polynomial in the size of the input. We show that there exist several problems that are MIN PB-complete with respect to an approximation-preserving reduction. These problems are very hard to approximate: in polynomial time they cannot be approximated within n^epsilon for some epsilon > 0, where n is the size of the input, provided that P ≠ NP. In particular, the problem of finding a minimum independent dominating set in a graph, the problem of satisfying a 3-SAT formula while setting the least number of variables to one, and the minimum bounded 0-1 programming problem are shown to be MIN PB-complete. We also present a new type of approximation-preserving reduction, designed for problems whose approximability is expressed as a function of some size parameter. Using this reduction we obtain good lower bounds on the approximability of the treated problems. CR Classification: F.1.3, F.2.2, G.2.2 Key words: approximation, reducibility, completeness, graph problems

B.S. Chlebus, K. Diks, A. Pelc Sparse Networks Supporting Efficient Reliable Broadcasting 332 Abstract Broadcasting concerns transmitting information from a node of a communication network to all other nodes. We consider this problem assuming that links and nodes of the network fail independently with given probabilities p < 1 and q < 1, respectively. For a positive constant epsilon, broadcasting in an n-node network is said to be epsilon-safe if source information is transmitted to all fault-free nodes with probability at least 1 - n^(-epsilon). For any p < 1, q < 1 and epsilon > 0, we exhibit a class of n-node networks with maximum degree O(log n), together with epsilon-safe broadcasting algorithms for such networks working in logarithmic time.

K. Havelund and K.G. Larsen The Fork Calculus 346 Abstract The Fork Calculus FC presents a theory of communicating systems in the family of CCS, but differs in the way processes are put in parallel. In CCS there is a binary parallel operator |, and two processes p and q are put in parallel by p|q. In FC there is a unary fork operator, and a process p is activated to ``run in parallel with the rest of the program'' by fork(p). An operational semantics is defined, and a congruence relation between processes is suggested. In addition, a sound and complete axiomatisation of the congruence is provided. FC was developed during an investigation of the programming language CML, an extension of ML with concurrency primitives, among them a fork operator. Key words: process calculi, process creation, bisimulation, axiomatisation CR Classification: D.3.1, D.3.3, F.3.1, F.3.2

H. Hungar and B. Steffen Local Model-Checking for Context-Free Processes 364 Abstract We present a local model checking algorithm that decides, for a given context-free process, whether it satisfies a property written in the alternation-free modal mu-calculus.
At the heart of this algorithm is a purely syntactic, sound and complete formal system which, in contrast to the known tableau techniques, uses intermediate second-order assertions. These assertions provide a finite representation of all the infinite state sets that may arise during the proof, in terms of the finite representation of the context-free argument process. This is the key to the effectiveness of our local model checking procedure. CR Classification: F.3.1, D.2.4

VOLUME 1 NUMBER 4 WINTER 1994 SELECTED PAPERS OF THE 4TH SCANDINAVIAN WORKSHOP ON ALGORITHM THEORY (SWAT94), JULY 6-8, 1994: S. Skyum Guest Editor's Foreword 387

P. Becker A New Algorithm for the Construction of Optimal B-Trees 389 Abstract In this paper the construction of optimal B-trees for n keys, n key weights, and n+1 gap weights is investigated. The best algorithms up to now have running time O(k n^3 log n), where k is the order of the B-tree. These algorithms are based on dynamic programming and build larger trees step by step from optimal smaller trees. We present a new algorithm with running time O(k n^alpha), where alpha = 2 + log 2 / log(k+1). This is a substantial improvement over the former algorithms. The improvement is achieved by applying a different dynamic-programming paradigm: instead of step-by-step construction from smaller subtrees, a decision model is used in which the keys are placed into the tree by a sequential decision process so that the cost becomes optimal and the B-tree constraints remain valid. CR Classification: E.1, H.2.2, I.2.8 Key words: B-tree, optimization, dynamic programming

E. Schenk Parallel Dynamic Lowest Common Ancestors 402 Abstract This paper gives a CREW PRAM algorithm for the problem of finding lowest common ancestors in a forest under the insertion of leaves and roots and the deletion of leaves.
For a forest with a maximum of n vertices, the algorithm takes O(m/p + r log p + min(m, r log n)) time and O(n) space using p processors to process a sequence of m operations presented over r rounds. Furthermore, lowest common ancestor queries can be answered in worst-case constant time using a single processor. For one processor, the algorithm matches the bounds achieved by the best known sequential algorithm. CR Classification: E.1 Key words: data structures

G. Das, P.J. Heffernan and G. Narasimhan Finding All Weakly-Visible Chords of a Polygon in Linear Time 433 Abstract A chord of a simple polygon P is weakly visible if every point on P is visible from some point on the chord. We give an optimal linear-time algorithm that computes all weakly-visible chords of a simple polygon P with n vertices. Previous algorithms for the problem run in O(n log n) time and compute only a single weakly-visible chord, if one exists. CR Classification: F.2.2 Key words: computational geometry, visibility, polygons, chords

S. Schuierer An O(log log n) Time Algorithm to Compute the Kernel of a Polygon 458 Abstract The kernel of a polygon P is the set of all points that see the interior of P. It can be computed as the intersection of the half-planes that lie to the left of the edges of P. We present an O(log log n) time CRCW-PRAM algorithm using n/log log n processors to compute a representation of the kernel of P that allows point-containment and line-intersection queries to be answered efficiently. Our approach is based on computing a subsequence of the edges that is sorted by slope and contains the ``relevant'' edges for the kernel computation. CR Classification: F.2.2 Key words: computational geometry, visibility, polygons, kernel

M.M. Halldórsson and J. Radhakrishnan Improved Approximations of Independent Sets in Bounded-Degree Graphs via Subgraph Removal 475 Abstract Finding maximum independent sets in graphs with bounded maximum degree Delta is a well-studied NP-complete problem. We introduce an algorithm schema for improving the approximation of algorithms for this problem, based on preprocessing the input by removing cliques. We give an algorithmic implementation of a theorem on the independence number of clique-free graphs, and use it to obtain an O(Delta/log log Delta) performance ratio with our schema. This is the first o(Delta) ratio for the independent set problem. We also obtain an efficient method with a Delta/6 (1 + o(1)) performance ratio, improving on the best performance ratio known for intermediate values of Delta. CR Classification: F.2.2, G.2.2 Key words: analysis of algorithms, approximation algorithms, independent sets

M. Peinado Hard Graphs for the Randomized Boppana-Halldórsson Algorithm for MAXCLIQUE 493 Abstract A randomized version of the MAXCLIQUE approximation algorithm by Boppana and Halldórsson is analyzed. The Boppana-Halldórsson algorithm has the best performance guarantee currently known for the MAXCLIQUE problem. This paper presents a class of graphs on which the performance ratio of the randomized version of the algorithm is no better than Omega(sqrt(n)), with probability greater than 1 - 1/n^(omega(1)). CR Classification: F.2.2, G.2.1, G.2.2, G.3 Key words: approximation algorithms, MAXCLIQUE, randomization

Author Index Volume 1 (1994) 517