Accepted Papers with Abstracts

Tero Hietanen and Manne Tervaskanto. DigiAuto - Automation Technology Training Centre

Abstract: The automation technology training organizations in Oulu are developing a new centralized digital learning environment for the region and a new operating model for automation education. The shared environment is being developed as an ERDF-funded investment project in 2017-2018. The project budget is 735 k€, of which ERDF funding covers 70%, own funding 20-25% and external funding 5-10%. 80% of the budget is used for investments and the remaining 20% for planning and implementation.
The active parties in the project are Oulu University of Applied Sciences (OAMK), Oulu Vocational College (OSAO), Oulu Adult Education Centre (OAKK) and the University of Oulu. Private financiers of the project include the Finnish Society of Automation, the Oulun teollisuuden ammattioppilaitos foundation, the Finnish Society of Measurement and Control, and six corporate financiers. In addition, several companies and organizations, such as Teknologiateollisuus (Technology Industries of Finland), provide equipment donations and support for project steering.
The project develops new, up-to-date learning environments that support future education in process, production and building automation as well as in automation information systems. Naturally, the same resources are also utilized in maintenance and electrical engineering education. The development of the various learning environments emphasizes, among other things, automation services, IoT, safety and security, and information systems.


Timo Malm, Timo Salmi, Ilari Marstio and Jari Montonen. Safe collaboration of operators and industrial robots

Abstract: International safety requirements for industrial robots were published already in 1992 (EN 775), which means that there is a long tradition of safety requirements for robots. Basically, the operator stays outside the safeguarded area during automatic run, but during teaching the operator may be beside a slowly moving robot. The old standard does not mention collaboration between humans and robots; the idea is to keep them separate. The current robot safety standards (ISO 10218-1:2011 and ISO 10218-2:2011) define collaboration modes for humans and robots. Collaboration is defined more specifically in the first edition of “ISO/TS 15066 Robots and robotic devices — Collaborative robots”, published in February 2016. The technical specification defines requirements especially for lightweight collaborative robots.

This paper is concerned with industrial robots which, under specific conditions, can operate in a collaborative mode. The idea of the safety system is that the robot stops before it can hit a person. New technologies enable flexible fenceless safety systems and dynamic safety regions, alongside a host of other attractive features for human-robot co-operation. One challenge related to modern industrial robots is that stopping a robot takes a long time and distance. Thirty years ago a typical maximum speed for a robot was 3 m/s, and the stopping distance was 40 cm for an emergency stop and 90 cm for a servo stop. Currently a typical maximum speed is 5 m/s, and since payload capacity and reach are also higher, the stopping distance (servo stop) can be 2 m. Two metres is too long a stopping distance: it corresponds to almost half of the entire working area of the robot. A practical solution to reduce the stopping distance is to reduce the speed. For example, an ABB IRB 4600 with a 21.8 kg load and a TCP speed of 2500 mm/s has a stopping distance (servo stop) of 65 cm, which is much more convenient than 2 m. As the speed decreases, the stopping distance drops dramatically. An emergency stop is quicker than a servo stop, since it initiates quick braking and cuts servo power. However, it does not provide a quick start-up, which is required in human-robot collaboration. Therefore, the emergency stop is reserved only for emergency and failure situations.

The basic factors in the separation control of a human and an industrial robot are the robot stopping distance, all delays related to sensors and controls, the human walking speed, and the human reach towards the robot through the sensor field. The functional safety of the entire separation control is also relevant. The performance level requirement for the safety system is PL d. If the safety system detects a malfunction or a slow initiation of a safety function, an emergency stop function is initiated.
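
As a rough illustration of how these factors combine, the sketch below computes a minimum protective separation distance in the spirit of ISO 13855 / ISO/TS 15066; the function name, timing values and intrusion allowance are illustrative assumptions, not the configuration used in the work.

```python
# Illustrative minimum protective separation distance between a human and a
# robot (ISO 13855 / ISO/TS 15066 style). All numeric values are assumptions.

def min_separation_distance(v_human, v_robot, t_reaction, t_stop, d_stop,
                            c_intrusion=0.85, z_uncertainty=0.10):
    """Return the minimum separation distance [m].

    v_human       : human approach speed [m/s] (1.6 m/s is a common assumption)
    v_robot       : robot speed towards the human [m/s]
    t_reaction    : sensor and control reaction time [s]
    t_stop        : robot stopping time after the stop command [s]
    d_stop        : distance travelled by the robot while stopping [m]
    c_intrusion   : allowance for reaching past the detection field [m]
    z_uncertainty : position measurement uncertainty [m]
    """
    human_travel = v_human * (t_reaction + t_stop)   # human keeps approaching
    robot_travel = v_robot * t_reaction + d_stop     # robot moves until stopped
    return human_travel + robot_travel + c_intrusion + z_uncertainty


if __name__ == "__main__":
    # Example using the servo-stop figure quoted above (timings are assumed).
    d = min_separation_distance(v_human=1.6, v_robot=2.5,
                                t_reaction=0.1, t_stop=0.5, d_stop=0.65)
    print(f"Required separation distance: {d:.2f} m")
```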

VTT has designed a Dynamic Safety System for robots to enable close collaboration between a human and an industrial robot. The aim was to build the system according to the current safety regulations. The other key principle was to avoid unnecessary emergency stops after a human is detected. The safety system consists of (for example) two safety laser scanners, a safety controller (SafeMove, ABB) and two Kinect camera sensors. Kinect is not a safety device, and here it only helps to select the correct safety mode. The idea is that when a person is not near the robot, the robot can move at high speed and the safety area (which stops the robot) is large. When Kinect detects a person, the robot speed is reduced and the safety area is smaller. When a person walks into the collaborative area so that the robot speed is reduced according to the separation distance, the robot does not have to stop. If the separation distance is too small, the robot stops, but it can restart as soon as the separation distance is adequate again. Specifying the safety areas is a relatively complex task, and therefore a specification tool was made to configure all areas and modes of the robot. For example, the laser scanner has 16 predefined safety areas, and SafeMove provides different monitored speeds, a safety-rated monitored stop and an emergency stop. The number of possible area combinations can be quite large, and therefore manual configuration without a dedicated tool can be risky.

There is a clear need for human-robot collaboration. Collaboration enables the utilization of the best sides of both humans and robots. Some tasks can be performed with lightweight collaborative robots, but there will also be tasks where the power of an industrial robot is required. In such cases, dynamic safety systems are needed to ensure safety.

Mike Mekkanen and Kimmo Kauhaniemi. Wireless Light-Weight IEC 61850 Based Loss of Mains Protection for Smart Grid

Abstract: This paper presents a novel Loss of Mains (LoM) protection method based on the IEC 61850 Manufacturing Message Specification (MMS) protocol. LoM or anti-islanding protection is a key requirement in modern power distribution grids where there is a significant amount of distributed energy resources (DER). Future Smart Grids are based on extensive communication capabilities, and thus communication-based LoM approaches will also become dominant. IEC 61850 based systems are gaining ground in substation communication, and therefore it is natural to expand this technology deeper into the distribution network. Using this standard for LoM protection also enables some advanced approaches utilizing the large variety of information available in the Smart Grid. There is a specific part of the standard, IEC 61850-7-420, which defines logical nodes (LNs) suitable for this purpose, but no devices applying this part of the standard are yet available. In this research, a light-weight implementation of IEDs (Intelligent Electronic Devices) is developed using a low-cost open microcontroller platform, BeagleBone, and applying available open source software. Using this platform, a wireless LoM solution based on the IEC 61850 MMS protocol has been developed and demonstrated. This paper introduces object modeling according to the LNs defined in IEC 61850-7-420 and an implementation applying direct client-server MMS based communication between light-weight IEDs. The performance of the wireless application using the developed platform is demonstrated by measuring the message latencies.
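
As an illustration of the latency measurement mentioned above, the sketch below times request-response round trips over a plain TCP socket; it is a generic stand-in rather than the actual MMS client-server stack used on the BeagleBone platform, and the host, port and payload are assumptions.

```python
# Generic round-trip latency measurement over TCP, used here only as a
# stand-in for timing MMS request/response exchanges between light-weight IEDs.
import socket
import statistics
import time

HOST, PORT = "192.168.1.50", 10102   # assumed address of the remote IED
N_SAMPLES = 100

def measure_latencies():
    latencies_ms = []
    with socket.create_connection((HOST, PORT), timeout=1.0) as sock:
        for _ in range(N_SAMPLES):
            t0 = time.perf_counter()
            sock.sendall(b"READ STATUS\n")                 # placeholder request
            sock.recv(1024)                                # wait for the reply
            latencies_ms.append((time.perf_counter() - t0) * 1e3)
    return latencies_ms

if __name__ == "__main__":
    lat = measure_latencies()
    print(f"mean {statistics.mean(lat):.2f} ms, max {max(lat):.2f} ms")
```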

In this paper, a novel LoM protection concept has been proposed, built upon the standardized communication solution provided by IEC 61850 and the specific LNs for DERs defined in IEC 61850-7-420. A light-weight implementation of an IEC 61850 based IED is developed in order to reduce the large information overhead and complexity of the standard. In addition to the LoM function, the developed solution has the ability to monitor DER status. The available monitoring information can be shared among the various distributed DER unit controllers, enabling a distributed decision-making approach for various purposes.


Antti-Juhana Mäki, Tomi Ryynänen, Jarmo Verho, Joose Kreutzer, Jukka Lekkala and Pasi Kallio. Using a Mathematical Temperature Estimation Model and Indirect Measurement to Maintain and Control the Temperature of the Cell Environment

Abstract: In the optimal culturing of human-derived cells, precise control of the physiological environmental factors of the cell, such as oxygen concentration and temperature, is important. Nevertheless, cells are traditionally cultured in a large incubator together with many other cell cultures, in which case the environmental factors cannot be controlled at the cell level. For this reason, we are developing a millimetre-scale cell culture system in which we integrate gas supply and precise temperature control to create an optimal culture environment at the cell level. The problem with direct temperature measurement from the cell area is that the measurement can disturb the cells and prevent important cell observation with an optical microscope, and the sensor is more difficult to install in the cell area than at an external location in the system. For this reason, we have developed a system in which the temperature control is based on the cell-area temperature estimated from an indirect measurement. In our system, the temperature estimate is obtained by combining a temperature measurement outside the cell area with a mathematical model developed in this work. We show that the system is able to maintain the cell-area temperature at 37 °C with an accuracy of approximately 0.4 °C for over 15 hours.
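
To make the idea of indirect estimation concrete, the sketch below maps an external temperature reading to a cell-area estimate through a simple first-order thermal model; the model structure, parameter values and names are illustrative assumptions, not the model developed in the paper.

```python
# Illustrative first-order estimate of the cell-area temperature from an
# external sensor reading. Model structure and parameters are assumptions.
import numpy as np

def estimate_cell_temperature(t_ext, dt=1.0, gain=1.05, offset=-1.2, tau=30.0):
    """Estimate the cell-area temperature [degC] from external readings t_ext.

    gain, offset : assumed static calibration between the two locations
    tau          : assumed thermal time constant [s] of the lag between them
    """
    t_est = np.empty(len(t_ext), dtype=float)
    t_est[0] = gain * t_ext[0] + offset
    alpha = dt / (tau + dt)                        # discrete first-order filter
    for k in range(1, len(t_ext)):
        target = gain * t_ext[k] + offset
        t_est[k] = t_est[k - 1] + alpha * (target - t_est[k - 1])
    return t_est

# Example: a small step in the external reading gives a lagged cell-area estimate.
external = np.concatenate([np.full(50, 36.0), np.full(200, 36.4)])
print(estimate_cell_temperature(external)[-1])
```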


Merja Mäkelä, Mirva Pilli-Sihvola and Matti Strengell. Empowering the Learning of Energy Automation

Abstract: Automation and energy technology projects can be kept profitable when tools and working methods are improved. New digital tools are being commissioned and novel working methods are required in many kinds of engineering, development and business projects. A lot of work takes place in communication networks and cloud services. In the long term, the learning contents of energy technology and automation should reflect the needs of working life. There are several driving forces calling for major changes in the engineering studies of public higher education. Working life is in continual change. Lifelong learning will increasingly become a parallel phase of careers, in the literal meaning of the term. At the same time, learning should be easily available, more entertaining and realizable with decreasing public funding. The information flow is enormous, and plagiarism is an everyday problem. More flexibility is called for in the teaching and learning arrangements of higher education. This paper presents three practical approaches for making the education of energy automation more efficient. The learning experiences, advantages and limitations of the used tools and methods are discussed. The authors recommend increasing collaborative course design, flexible lectures and laboratory workshops realized as distance learning, and more innovative and entertaining documentation methods for both students and teachers. https://www.powtoon.com/online-presentation/fVlXnqZLWZc/empoweringlearning/


Hans Aalto. Real-time Optimization of Tall Oil Distillation using Model-Predictive Control

Abstract:
Timo Saarenko, Forchem Oyj; Jaana Valtanen, Forchem Oyj; Hans Aalto, Neste Jacobs Oy; Stefan Tötterman, Neste Jacobs Oy

Crude tall oil is a by-product of pulp mills and can be further processed in tall oil distillation plants, which provide valuable bio-based chemical products, the most important of which are rosin and fatty acids. Tall oil distillation is normally done in plants consisting of three to four vacuum distillation columns, because tall oil cannot be heated excessively in the distillation process: otherwise undesirable side-reactions occur which decrease the product yields. Another challenge in the process is the overlapping boiling ranges of the chemical compounds involved.
Forchem Oyj in Rauma, Finland recently initiated a development program targeting a production of 200 000 tons per year through investments with low capital expenditure. One of those investments was an Advanced Process Control (APC) package able to continuously run the process at an optimum operating point. The APC package chosen is a multivariable model-predictive control (MPC) solution with optimizing features, the most important of which is the ability to push the crude tall oil feed towards maximal values while respecting multiple dynamically varying process constraints. The MPC solution uses measured product quality variables provided by on-line near infrared (NIR) analyzers, which are known to drift slowly and to sometimes give faulty readings. The MPC solution is therefore backed up with an Automatic Calibration application which connects to Forchem's Laboratory Information Management System, optimally adjusts the NIR readings with the help of laboratory-measured product quality data, and removes NIR analyzer outliers if they occur. The MPC application is equipped with product grade change logic because two types of rosin are produced.
In guarantee runs, the MPC application package was demonstrated to be capable of keeping the production at a level 8% higher than was reasonably achievable before the MPC installation, without any significant product yield degradation - on the contrary, the yield of fatty acids has increased by around 1.5 percentage points. The operation of the plant is more stable than before, and the work of the control room operators has become much easier than before the MPC installation.
The paper is organised as follows: first the process is presented together with the client's expectations towards real-time optimization and control. The control and optimization strategy is then presented, including some detailed highlights such as production maximization, product yield maximization and energy minimization in the column reboiler hot oil system at times when maximal production is not required. The MPC implementation project's steps and timeline are roughly outlined, and finally the project results are presented.
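
For orientation, the kind of optimizing MPC described above can be summarized by a generic textbook formulation with an added economic term that rewards feed maximization; this is a sketch of the general idea, not the specific formulation of the delivered application:

\begin{aligned}
\min_{\Delta u(k),\ldots,\Delta u(k+N_u-1)} \quad
& \sum_{i=1}^{N_p} \lVert \hat{y}(k+i) - r(k+i) \rVert_Q^2
  + \sum_{i=0}^{N_u-1} \lVert \Delta u(k+i) \rVert_R^2
  - \rho\, F_{\mathrm{feed}}(k+N_p) \\
\text{subject to} \quad
& u_{\min} \le u(k+i) \le u_{\max}, \qquad
  y_{\min} \le \hat{y}(k+i) \le y_{\max},
\end{aligned}

where the last term pushes the crude tall oil feed towards its maximum while the output constraints represent the dynamically varying process and product quality limits.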

 

Alexander Smirnov, Teemu Sillanpää, Toni Hartikainen, Olli Pyrhönen and Sirkka-Liisa Jämsä-Jounela. Control Strategy for Supercritical Generator with Magnetic Bearings in Small Gas Turbine Application

Abstract: With the global trend towards more distributed power generation and the possibility to utilize different fuel types, microturbines and small gas turbines present a valuable new solution. Through co-generation, these turbines provide both electricity and heat. To keep the system efficient, a high-speed generator with a direct connection to the process should be utilized in such systems. The operating speed range of the electrical machine can be extended to the supercritical region by the application of magnetic bearings. In this article, a control strategy for an active magnetic bearing system that allows operation above the first flexible mode of the rotor is discussed. The separation into several control layers is presented and the features of each layer are described.


Lauri Haapanen, Markus Sintonen and Andreas Frejborg. OPC UA - More Than a Communication Protocol

Abstract: This paper describes how OPC UA provides a rich view of and flexible access to industrial data. An OPC UA server acts as a graph database, which is well suited for creating highly versatile information models. This graph is based on nodes and the relationships between them, creating a network of nodes. The graph database has a flexible set of services for obtaining information about the structure of the graph. OPC UA standardizes the services for querying information from the contents of the server. These services allow querying the information from different perspectives. For example, a user can acquire the hierarchical structure of the information model as well as extract information from the graph as a whole. A subset of the graph can also be queried based on a certain structure of the graph. This kind of graph-like structure creates new possibilities for modelling and accessing the complex information sets that e.g. process industries commonly face. For example, one could extract all of the pressure measurements from a certain type of process equipment based on a topological UA model of a plant. This brings new opportunities for creating intelligent software and services utilizing not only the process measurements but also the structural information contained inside the UA graph. UA information models bring structure to the otherwise unstructured and hard-to-interpret data that traditional key-value pair automation databases offer. OPC UA also standardizes how the information is represented, which allows easier interpretation of the data compared to traditional proprietary automation systems. Standards exist for everything from simple measurements all the way up to complex devices, for example analyzers and other industrial equipment. This makes it possible to create new services based on plant information, which has previously been impossible due to the proprietary nature of automation systems. This brings digitalization to the plant level.
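
As a small illustration of traversing such a graph, the sketch below walks an address space with the open-source python-opcua client and collects variables whose browse name contains "Pressure"; the endpoint URL and the naming rule are assumptions, and the standardized Query services themselves are not exercised here.

```python
# Illustrative recursive browse of an OPC UA address space with python-opcua.
# The endpoint URL and the "Pressure" naming rule are assumptions.
from opcua import Client, ua

def collect_pressure_nodes(node, found, visited):
    """Walk hierarchical references from 'node' and collect pressure variables."""
    for child in node.get_children():
        if child.nodeid in visited:
            continue
        visited.add(child.nodeid)
        name = child.get_browse_name().Name
        if child.get_node_class() == ua.NodeClass.Variable and "Pressure" in name:
            found.append((name, child.get_value()))
        collect_pressure_nodes(child, found, visited)

if __name__ == "__main__":
    client = Client("opc.tcp://localhost:4840/freeopcua/server/")  # assumed endpoint
    client.connect()
    try:
        found = []
        collect_pressure_nodes(client.get_objects_node(), found, set())
        for name, value in found:
            print(name, value)
    finally:
        client.disconnect()
```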


Markus Pulliainen. IoT and the Automaatiopäivät

Abstract: Summary

This abstract has been prepared for the Automaatiopäivät of 23-24 March 2017, and the focus of the text is the topic of a Master's thesis, namely defining the revenue logic of IoT in B2B business. The choice of topic was based on the need to define the revenue logics of IoT business models and the structure of the ecosystem. In addition, IoT is part of digitalization and of the development of automation. Four different areas took shape in the study: revenue logics, the structure of the ecosystem, customers, and Atos Oy, for which the study was carried out.

Research questions and results

The first research question concerned the characteristics of a functioning ecosystem and the relationship between revenue logic and the ecosystem. The second question concerned how to define an IoT solution from the customer's perspective in a way that makes it possible to optimize and maximize the benefits and minimize the risks. The third question of the study sought to answer how the resulting value proposition is converted into money.
Based on the study, it was found that the relationship between the ecosystem and the revenue logic depends on the structure of the prevailing ecosystem and on the business model, but it also emerged that all revenue logics can be suitable for all ecosystems. The relationship between the ecosystem, the business model and the revenue logic should be seen as a horizontally progressing process in which strategic changes can be made to each element when needed. One example that, in a way, completely changes how the customer weighs risks and returns is responding to competitors' products and services.
From the company's perspective, the value proposition can be realized in three ways. Two of these can be equated with achieving tactical goals, i.e. realizing the benefits targeted when adopting IoT. A good example of this is being able to optimize costs or to manage the predictability of orders and deliveries. Here, ordinary sensor data and KPI values are of considerable help.
The third value proposition is based on achieving strategic goals, i.e. succeeding in creating new value for the customer. Good examples of this are the smart car or Nike's smart running shoe. The idea in these is that the product, in the customer's use, changes the nature of the product and the customer's attitude towards it on the basis of the added value it creates.

Automaatiopäivät and IoT

IoT should be seen as an essential part of the development of automation and of enabling the comprehensive implementation of digitalization. The Automaatiopäivät are connected to IoT from the perspective of remote identification and the future of Industry 4.0. Process automation, qualitative data computation, optimization of the financial result, simulation and risk management are areas that can be analysed from the perspective of IoT revenue logics and of the nature and structure of the ecosystem. The presentation for the Automaatiopäivät is intended to be prepared on the basis of an industry paper.

Further research is needed on the definition of standards, on information security questions and on the qualitative definition of data. The nature of the ecosystem and the role of its stakeholders also feature strongly in these questions.


Martin Ollus and Björn Wahlström. Professor Hans Blomberg's Role as a Developer of Finnish Automation Research and Education

Abstract: Background
The success of Finnish industry after the wars is largely based on the efficient operation of processes, which also includes the use of advanced automation. Professor Hans Blomberg (1919-2006) was a key researcher and teacher in this field, who contributed significantly to the understanding and adoption of the principles and methods of managing production systems. At the same time, he trained a large number of researchers as well as professionals who went on to work in industry and business.
Hans Blomberg was a forerunner in the renewal of teaching methods and was, for example, among the first professors at the Helsinki University of Technology who did not require attendance at lectures. Likewise, he was among the first professors to launch systematic doctoral studies. Twenty-two doctors graduated under his supervision, and 15 of the people who worked in his laboratory have been appointed professors at various universities and research institutes. The Finnish university network expanded strongly in the 1960s; among other things, universities of technology were founded in Tampere (1965) and Lappeenranta (1969). Blomberg's students moved into the newly established positions. It can justifiably be said that Blomberg or his students have trained the majority of, for example, today's active automation engineers in industry. As Blomberg's own students are now retiring, the "third generation" they have trained has continued the work of advancing the automation field.
Blomberg's own research topics were mainly theoretical and mathematical. His basic idea was, however, that method development should start from practical problems, for which sustainable and well-founded solutions are sought. For this reason, applied research was also carried out under his direction. He familiarized himself with the problems and put forward solution proposals. The thoroughness and precision of the work often led to new, innovative solutions. In 1968, a computer suitable for the control of industrial processes was acquired for Blomberg's laboratory and used in research and teaching. Blomberg was also active when a hybrid computer was acquired, with which the management and control of Finland's first nuclear power plants were studied and developed. As the first director of the Technical Research Centre of Finland (VTT) and later as a member of its advisory council, Blomberg participated actively in the development of applied research in his field. VTT's automation research largely originated from the work that Blomberg and his students started and carried out.
The paper
The paper is based on a Swedish-language article appearing in early 2017 (Ollus & Wahlström, 2017), which describes Professor Blomberg and his life's work. The Automaatiopäivät paper focuses on teaching and research activities in the automation field and on their practical applications. The most important research topics and themes are presented, and Blomberg's work and significance are discussed in a broader historical context. Blomberg was appointed professor in 1956. After the Second World War, many new ideas related to optimization and the modelling of systems had been published. Blomberg adopted these quickly and started research work around them. The rapid development of computers and information technology at the same time made practical applications possible.
Reference
Ollus Martin, Wahlström Björn, ”Professor Hans Blomberg - Pionjären inom den finländska automationsforskningen och –utbildningen”, Serien Finlandssvenska tekniker, Tekniska Föreningen i Finland, 2017.

 

Abstract: A significant and ever-increasing portion of today's global electrical energy production is consumed in data processing and transmission. With the continuous growth of cloud computing, data centers and the upcoming Internet of Things, the energy consumed by these applications is projected by many experts to increase considerably in the future.

FPGAs offer the flexibility to implement reconfigurable and application-tailored digital hardware with reduced energy consumption compared to general purpose CPUs. Furthermore, the massive parallelism and computational power of FPGAs make it possible to implement powerful local online processing already on raw data and thus, in addition to the processing itself, also reduce the energy wasted in transmission.

Intel's new MAX10 FPGAs offer some of the highest logic densities of non-volatile FPGAs available on the market today. This allows the implementation of Systems on Programmable Chip that can perform complex DSP applications while still being powered by batteries. Accurate simulations of energy consumption are crucial for designing efficient implementations. Intel claims an accuracy of ±20% when using the Quartus II PowerPlay tool with simulation traces to estimate the FPGA's power consumption.

In this work we implement and simulate three DSP applications: FFT for audio signals, RGB-to-HSV color space conversion for video streams, and a genetic algorithm (GA) for solving an NP-hard optimization problem. Simulation traces are exported from ModelSim to Quartus II and used to estimate the power consumption of each of the three applications. The simulation results are then compared to actual measurements obtained from a power monitoring chip that measures the voltage and current of Intel's 10M50DAF484C6G FPGA on Arrow's DECA development board. Finally, performance per watt is compared when running the GA and FFT on a TI BeagleBone CPU versus a BeagleBone with an FPGA as a hardware accelerator.

Heikki Hyyti and Tuomo Palonen. Point Cloud Processing for an Augmented Reality User Interface

Abstract: An augmented reality human-machine interface is demonstrated in the cabin of a forest machine outdoors, in real time, for the first time. In this paper, we propose a system setup to collect point clouds with a crane-boom-mounted lidar and a real-time capable algorithm for point cloud processing to augment the operator's visual field with measurements from the forest machine and its environment. Real-time point cloud data is collected using a low-cost range-measuring 2D laser scanner rotated by the forest machine boom as the machine is operated normally. Combined with the boom rotation measurement, the range data is transformed into a 3D point cloud model, and the model is filtered for obstacles and moving targets in real time. The real-time filtering is made possible by measuring-angle-based indexing of the point cloud data in an octree data structure. The proposed indexing method makes it possible to efficiently filter out previous lidar measurements that a new range measurement sees through, i.e. stored points whose range is smaller than that of a new measurement in approximately the same direction. The same idea could be further expanded to any case where large 3D point clouds need to be efficiently filtered. The limitation of the proposed method is that the ego motion of the lidar has to be small enough for the method to be efficient, as the measuring-angle-based indexing needs to be rebuilt if the lidar moves too much. In the augmented reality human-machine interface, the operator's head pose is measured using machine vision camera and IMU integration, and the point cloud model is rendered into the operator's view. The combined system is demonstrated to work in real time on a forest machine research platform.
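
The see-through filtering idea can be sketched as follows: points are indexed by their quantized measuring direction, and stored points are dropped whenever a new measurement in roughly the same direction reaches further. This is a simplified angular-grid stand-in for the paper's octree-based indexing, and all names and bin sizes are illustrative.

```python
# Simplified measuring-angle-based filtering: index points by (azimuth, elevation)
# bin and drop stored points that a new, longer range measurement "sees through".
import math
from collections import defaultdict

BIN = math.radians(0.5)                    # assumed angular bin size

def direction_key(x, y, z):
    az = math.atan2(y, x)
    el = math.atan2(z, math.hypot(x, y))
    return (int(az // BIN), int(el // BIN))

class AngleIndexedCloud:
    def __init__(self):
        self.bins = defaultdict(list)      # direction key -> list of (range, point)

    def add_measurement(self, point):
        x, y, z = point
        r_new = math.sqrt(x * x + y * y + z * z)
        key = direction_key(x, y, z)
        # Keep only stored points that are at least as far as the new range;
        # closer points were "seen through" and are treated as transient.
        self.bins[key] = [(r, p) for (r, p) in self.bins[key] if r >= r_new]
        self.bins[key].append((r_new, point))

    def points(self):
        return [p for entries in self.bins.values() for (_, p) in entries]

cloud = AngleIndexedCloud()
cloud.add_measurement((5.0, 0.1, 0.2))     # e.g. a person standing in front of a tree
cloud.add_measurement((9.0, 0.1, 0.2))     # later the beam reaches the tree itself
print(cloud.points())                      # only the farther, persistent point remains
```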

Bei Sun, Chunhua Yang and Sirkka-Liisa Jämsä-Jounela. A receding horizon gradient optimization approach for multiple reactor systems

Abstract: This paper presents a two-layer control scheme for a multiple reactor system. Based on the characteristics of the multiple reactor system, two concepts, additive utilization efficiency (AUE) and impurity removal ratio (IRR), were defined. Using these two concepts, a gradient optimization approach was proposed. The gradient optimization approach transforms the economic optimization problem of the multiple reactor system into finding an optimal decline gradient of the impurity ion concentration along the reactors. A robust adaptive controller is designed to track the optimized system states in the presence of process uncertainties. The capability of the gradient optimization scheme is illustrated with a simulated case study.


Hans Aalto, Jussi Mäkilä and Kari Kunnari. The Role of Automation in Energy Efficiency Improvement

Abstract: For obvious reasons, it is necessary to continuously improve energy efficiency in all areas of human activity. This paper limits the point of view to the process industries and energy production, although buildings, society and infrastructure, including transportation, are also important areas.
Different sub-branches of automation, including (but not limited to) sensors and instrumentation, on-line analytics, process control (algorithms), information management systems, communication protocols and real-time optimization algorithms have for decades been applied within process and energy industries.

The recently issued ISO 50001 energy management standard dictates that energy flows shall be measured, monitored, analyzed and reported properly and that successive improvement policies with regard to energy efficiency must be applied. The reporting of Energy Performance Indicators (EnPIs) cannot function properly if the measurement sensors in the process are not working properly. One possible action taken based on an analysis utilizing the EnPIs is an improvement in the way that the process is operated. That type of improvement can be implemented in an "open loop" fashion requiring human involvement, or in "closed loop" operation by means of automatic feedback control. It is also to be noted that the performance of existing control loops affecting energy efficiency can in many cases be poor due to defects in control loop design and non-optimal controller parameter settings.

The paper briefly discusses sensors - both physical and soft sensors - and demonstrates through a few examples that this field is far from mature, i.e. it is still developing and opens up new opportunities for energy efficiency improvement. The focus of the paper is to provide examples which demonstrate the very wide range of feedback and model-based control applications. We walk through some examples of simple yet important control loops and then proceed to the area of advanced process control (APC). The first example comes from a large steam system of a petrochemical plant, and we conclude with an application which is commonly believed to be extremely difficult to solve: real-time minimization of compression energy in a gas pipeline network.

In industrial processes, energy efficiency can be improved by a pinch analysis and a subsequent rearrangement of heat exchangers, possibly leading to considerable changes in the degree of heat integration, including heat recycles. These recycles may have a drastic effect on the dynamics and controllability of the process, and overlooking an update of the control configuration may cause serious performance degradation when restarting the plant after the heat integration project has been implemented. This serves as another example of why it is beneficial to apply a good understanding of automation and control when evaluating the energy efficiency improvement potential and making recommendations for actions to achieve it.

 

Kai Zenger. LUMA activity: A way to challenge the active minds of young people

Abstract: LUMA is a national activity intended to attract young people to study natural sciences and technology. The paper gives an overview of the operation of LUMA Centre Finland and especially LUMA Centre Aalto.


Hans Aalto and Jussi Koskela. Automation technology opportunities in natural gas transmission - pipelines and LNG

Abstract: In natural gas transmission it may be difficult to classify advanced features as either "process technology" or "automation technology". For instance, is an on-line, real-time calculation tool based on rigorous chemical and thermodynamic analysis process technology or automation technology? Clearly, if the processing of on-line measured data needs advanced conditioning and filtering methods, the degree of automation content becomes higher and justifies the label "automation technology".
This paper presents a number of applications within natural gas processing without using too strict classifications. For the sake of completeness, we list a number of mature applications but also discuss applications still in a development phase.
Natural gas pipeline networks utilize automation technologies which are not very often encountered in the process industries: one example is leak detection, which is based on dynamic mass balance models and measured pressure and gas flow signals along the pipeline. Dynamic models for gas flow in pipeline networks have been systematically developed during the last decades and have matured so that on-line look-ahead dynamic simulations based on models and real-time measurements are nowadays routinely used by pipeline operators.
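
As a toy illustration of the mass-balance principle behind such leak detection, the residual below compares metered inflow and outflow against the change of linepack estimated from pressure; the linepack relation, parameter values and threshold logic are assumptions made only for this sketch.

```python
# Toy mass-balance leak indicator for a pipeline segment: a persistent positive
# residual (in - out - linepack change) suggests a leak. All values are assumptions.
def leak_residual(m_in, m_out, linepack_prev, linepack_now, dt):
    """Mass residual [kg/s] over one time step."""
    return m_in - m_out - (linepack_now - linepack_prev) / dt

def linepack(p_avg, volume, R=518.3, T=283.0, z=0.9):
    """Rough gas mass [kg] in the segment from average pressure (real-gas factor z)."""
    return p_avg * volume / (z * R * T)

# Example over a 60 s step; a sustained nonzero residual would indicate a leak.
lp0 = linepack(60.00e5, 5000.0)
lp1 = linepack(59.99e5, 5000.0)
print(leak_residual(m_in=50.0, m_out=49.0, linepack_prev=lp0,
                    linepack_now=lp1, dt=60.0))
```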

It is a demanding practical task to build a pipeline network-wide Supervisory Control and Data Acquisition (SCADA) system, but the technology is mature. A recent achievement in SCADA is an improved reliability-to-cost ratio. Pipeline compressor stations can have a quite complicated, multi-machine structure, and they typically benefit from optimal load-sharing controls, which are nowadays routine automation deliveries. Comprehensive pipeline system-wide control and real-time optimization, often referred to as "transient optimization", is still very rare because these tasks are considered to be very complex. The reason for this is that the control and optimization formulations are derived from first-principles pipeline flow models, which certainly leads to a mathematically very complex problem. The paper describes an alternative real-time optimization formulation based on Model-Predictive Control (MPC), which has proved to be very successful in the process industries.

The Liquified Natural Gas (LNG) business differs from natural gas transmission in pipelines because some operational challenges are removed, since long-distance transport between LNG terminals is handled by LNG tanker ships. Other challenges are introduced, of which the most important one is the roll-over phenomenon in LNG storage tanks. In the paper, the roll-over phenomenon is described together with its prediction and avoidance, which can be seen as an automation technology solution and, from a certain point of view, an advanced safety automation application.

 

Krista Rahunen. Testing of displays of protection and control relays with machine vision

Abstract: The local human-machine interface (LHMI) is the link between the user and the device. In protection and control relays, the LHMI consists of a display, buttons, light-emitting diode (LED) indicators and communication ports. The LHMI is tested before assembly by visual inspection. The visual inspection test system consists of a camera and lenses, an LED analyser, software and a computer. Machine vision operations, such as corner detection and template matching, are used to process and analyse the captured images. A new camera and optics have been installed in the system. The camera and the software need to be evaluated and improved to support each other, to obtain better images and, further, to improve the first pass yield (FPY).
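
A minimal sketch of the template-matching step mentioned above, using OpenCV; the file names and the acceptance threshold are illustrative, and the production test system naturally involves much more than this.

```python
# Minimal template-matching check of a captured LHMI display image with OpenCV.
# File names and the pass/fail threshold are illustrative assumptions.
import cv2

captured = cv2.imread("captured_display.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("expected_symbol.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation; best match score and location.
result = cv2.matchTemplate(captured, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

THRESHOLD = 0.8                              # assumed acceptance limit
status = "PASS" if max_val >= THRESHOLD else "FAIL"
print(f"{status}: score={max_val:.3f} at {max_loc}")
```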

Matias Waller and Jonas Waller. Identification and Model Structures: Pole-Placement Versus PI- and PID Control

Abstract: A parametric model for a laboratory-scale ventilation system is identified. Based on the model, digital controllers are developed using a standard structure for pole-placement design. The performance of the controllers is shown to be crucially dependent on the number of parameters in the identified model: even though no significant difference can be perceived in the ability of the identified models to describe the underlying system, i.e., no apparent model mismatch at control-relevant frequencies, the designed controllers will either perform well based on a statistically worse model or fail based on a better one. The pole-placed controllers are further compared to digital PI and PID controllers, for which no such sensitivity is observed. The implications of the comparative study for the entire control design process are discussed.
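
To make the identification step concrete, a least-squares fit of a simple ARX model (the kind of parametric model referred to above) can be sketched as follows; the synthetic data and the chosen orders are placeholders, not the ventilation measurements of the paper.

```python
# Least-squares identification of an ARX(na, nb) model
#   y(k) = -a1*y(k-1) - ... - a_na*y(k-na) + b1*u(k-1) + ... + b_nb*u(k-nb)
# The synthetic data and model orders are placeholders for illustration.
import numpy as np

def identify_arx(u, y, na, nb):
    n0 = max(na, nb)
    rows = []
    for k in range(n0, len(y)):
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
    phi = np.asarray(rows)
    theta, *_ = np.linalg.lstsq(phi, y[n0:], rcond=None)
    return theta[:na], theta[na:]            # a-parameters, b-parameters

# Example with synthetic first-order data: y(k) = 0.9*y(k-1) + 0.2*u(k-1) + noise.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.2 * u[k - 1] + 0.01 * rng.standard_normal()
a, b = identify_arx(u, y, na=1, nb=1)
print(a, b)                                  # approximately [-0.9] and [0.2]
```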


Jukka Koskinen, Tapio Heikkilä and Jari Ahola. A Reconfigurable Robot-Sensor System for Handling Wooden Parts

Abstract: This paper considers the modeling and design of configurable autonomous robot systems and gives outlines and an example for the flexible robotic handling of parts in industrial applications. We focus on a reconfigurable robot-sensor subsystem which detects and localizes parts in the working space of the robot. The 3D point cloud based object detection and recognition subsystem is built from a variety of exchangeable and configurable components. The configuration concept is based on a flexible architecture, where components and subsystems are integrated with interfaces based on ROS (Robot Operating System). The evaluation of the configuration-based design approach was carried out with Key Performance Indicators, such as reconfigurability, recomposability, interoperability, robustness, performance and resilience. The results show that the developed approach clearly improves commissioning, configuration and robustness compared to current standard procedures.
 

Martin Ollus. Experiences of Automation Applications

Abstract: The aim of the paper is to describe personal experiences from the development and application of automation solutions, mainly in industrial environments, over six different decades spanning from the late 1960s to the early 2010s. Although the text is based on personal experience, the described activities are related to the historical context. This relation is studied by considering three dimensions: (1) advances in theory and methodology, (2) the development of enabling technologies, especially information and communication technology (ICT), and (3) needs and expectations in the operational environment. Naturally, these dimensions are interdependent. For instance, globalization, which takes place in the environment, has had an increasing impact on needs and expectations during recent years. Simultaneously, advances in ICT have been an important enabler and driver of globalization. The enablers also contribute significantly to the appearance of new possibilities, which again generate new expectations and needs in the markets.
The paper describes applications of Systems Theory, which is used as a generic term for the disciplines aiming at understanding systems and their behaviour. The paper presents applications where the understanding of reality is used for specific purposes, such as controlling the behaviour of the studied system or anticipating its future behaviour, e.g. automation solutions. In these applications, models of the real systems and their environment play an important role.

 

Tommi Reinikka, Tomi Roinila, Tuomas Messo and Roni Luhtala. Implementation of Grid Emulator for Online Stability Analysis of Grid-Connected Systems

Abstract: The number of grid-connected power electronic devices is increasing, which is known to cause stability issues for the power grid. Injecting large amounts of harmonics into the grid is a known issue of power electronic devices such as inverters. The stability of each new inverter introduced into the power grid has to be evaluated, and an easy way to do this, without complicated models of the power grid and the devices connected to it, is to use the impedance-based stability criterion. The inverter remains stable if it satisfies the Nyquist stability criterion. For evaluating the stability, measurements of the grid impedance and the inverter output impedance are required. These values can be calculated using only inverter output current and grid voltage measurements; the impedance is obtained by frequency response analysis between these two signals. The perturbation has to be injected from the grid side to gain information on the inverter output impedance, and vice versa. This study concentrates on obtaining information about the inverter output impedance up to as high frequencies and as accurately as possible, so the perturbation is injected with a grid emulator. At high bandwidths with fine resolution, the equipment in use is the most likely source of problems, and this study explains the issues which need to be taken into account. The measurements are done with a pseudo-random binary sequence (PRBS), which has excellent characteristics compared to other multifrequency perturbation signals for online measurements. The use of a PRBS allows averaging over a signal which is similar to white noise, a preferable property in frequency response analysis. The experiments shown in the study can be integrated into other systems if needed; however, in this paper they are performed separately to verify the measurements and impedance values.
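
For reference, a maximum-length PRBS of the kind used as the perturbation can be generated with a simple linear feedback shift register; the register length, feedback taps and amplitude mapping below are generic textbook choices, not the exact excitation design of this study.

```python
# Generate a maximum-length PRBS with a linear feedback shift register (LFSR).
# Register length, taps and amplitude mapping are generic textbook choices.
def prbs_mls(n_bits=10, taps=(10, 7), seed=0b1):
    """Return one full period (2**n_bits - 1 samples) of a +/-1 valued sequence."""
    state = seed
    sequence = []
    for _ in range(2 ** n_bits - 1):
        feedback = 0
        for t in taps:                       # XOR of the tapped register bits
            feedback ^= (state >> (t - 1)) & 1
        sequence.append(1.0 if (state & 1) else -1.0)
        state = (state >> 1) | (feedback << (n_bits - 1))
    return sequence

seq = prbs_mls()
print(len(seq), sum(seq))   # 1023 samples; the +/-1 levels are nearly balanced
```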

 

Matias Palmuoki. Development of a rapid control prototyping system for large diesel engines

Abstract: This paper presents the development and commissioning of a rapid control prototyping system intended for engine control software development. Environmental legislation, increased demand for fuel flexibility and customer requirements place high demands on the development of new control algorithms for large diesel engines. To meet the demand for increased functionality, more resources need to be deployed, or the control software needs to be developed in a smarter way. Rapid control prototyping is a widely known method to speed up algorithm development at an early stage. This method, the process of choosing the right suppliers for this case, and the development of the first setup are discussed in this paper. Using the newly commissioned rapid prototyping system is expected to increase the number of new algorithms in the end product by reducing work steps and lead time in the development workflow. The new rapid prototyping system is fully compatible with Matlab/Simulink, thus providing a platform where control engineers can focus on algorithm development without the need for low-level programming skills.


Timo Oksanen. Project-Based Learning in the Master's Programme in Automation and Electrical Engineering

 


Veli-Pekka Pyrhönen. Vygotsky’s Zone of Proximal Development in Connection with Technology-Rich Blended Learning Environments

Abstract: Currently, universities invest in technology-rich blended learning environments (TRBLEs), which are equipped with modern web technologies, audiovisual hardware, and professional software for learners to practice with. Often, the layout and construction of such environments are flexibly organized such that group work as well as social interaction between learners and teachers are naturally enabled. Therefore, it is possible to link the concept of Vygotsky's zone of proximal development (ZPD) of learners with these new environments, and to investigate the influence of such environments on the size of the learners' ZPD. As such, this paper studies Vygotsky's ZPD in connection with TRBLEs intended for education. To be more specific, a conceptual interpretation of TRBLEs as a tool for shaping the ZPD of learners is provided; such environments not only support active self-regulated learning but also enable learners to surpass their current abilities. The possible influence of TRBLEs, as well as of teacher-facilitated technological and social scaffolding, on the theoretical size of the ZPD of learners is illustrated. With this, teachers can reorganize and rethink their learning episodes in a way that allows higher-level tasks to be completed within the strictly limited classroom time available. As a result, more can be achieved during time-limited contact teaching, which eventually translates into better learning outcomes in a course-wide setting.

Jonathan Fagerström and Charlotta Risku. Development of dynamic model for energy management of community electric vehicles

Abstract: Solar photovoltaics (solar PV) is globally considered one of the most promising renewable energy sources for power generation. Northern countries have remained careful in their solar PV exploitation due to the evident temporal distribution of solar irradiance. However, a recent paper focusing on the situation in Finland stated that solar PV has the potential to contribute as a cost-efficient and low-CO2-emitting energy source in our future energy systems, but the authors mentioned a need for more detailed evaluations of local solar resources.

The city of Vaasa in Finland is planning a new city district for about 2500 inhabitants with a special focus on innovation, energy efficiency, and sustainability. In order to achieve such an energy system, there is a need for a computational model that enables comparisons of different energy system designs. For this purpose, the city of Vaasa is developing a model for dynamic simulations where the production and consumption of heat and power are integrated. The commercial software Trnsys is a graphically based environment for transient simulations of energy systems and is applied in this paper to create the model of the city district.

The role of electric vehicles (EVs) in future cities is fairly clear, and a sharp increase is expected within the coming decades. The main obstacles to achieving this huge rate of change have been summarized as automation, electrification, and vehicle sharing.

This paper addresses these global obstacles by assessing the techno-economic situation of an EV fleet in the city district under development in Vaasa. We model the energy system in the Trnsys software in order to supply knowledge that facilitates fact-based decisions regarding a potential fleet of community EVs. The specific objective of the paper is to describe the modeling principles and primary results for a hybrid solar PV system where the EV fleet is charged through a stand-alone power plant with back-up from the national grid.

Ville Lohikainen. Process simulations for Operator training and DCS verification

Abstract: Along with the technical advancements of simulation software and technologies, the energy and pulping industry has seen an increase in process simulations as part of operator training for new plants and lines. Successful training deliveries worldwide reveal tangible results in terms of improved operator skills, shortened start-up curves and increased operational safety.

Even if the benefits of having well-trained operators are enough to justify the investment in simulator solutions for new plants or major rebuilds, process simulations have more to offer. Simulations are increasingly used for verification of the DCS programming during the DCS FAT, saving hours of troubleshooting during the often stressful plant commissioning phase.

This paper presents a typical Valmet process simulation and how it can be used for operator training and DCS verification. A customer case is presented where Valmet delivered process simulations for all pulp mill process areas for CMPC Celulose Riograndense Guaiba mill in Brazil. The process simulations covered the fiber line and pulp drying units, a complete recovery line, a chemical plant, as well as two water treatment plants. The process simulations were used for extensive operator training and also for verification of the DCS programming prior to startup. For the customer, Valmet simulators were a differentiating factor, reducing safety risks and speeding up the learning curve far beyond what is possible with traditional classroom training.

Mika Billing. MoRo - A Modern Robotics Education Collaboration Project

Abstract: MoRo - the Modern Robotics Education Collaboration Project is a joint project of the universities of applied sciences of Vaasa, Tampere and Turku. Its goal is to raise the level of competence in modern robotics in educational institutions and companies and to lower the threshold for adopting new robot technology. A further goal is to increase cooperation between the educational institutions as well as the sharing and development of competence together with working life.

Kaj Juslin. Sustainable Digitalization for an Integral Check of Plant Dynamics

Abstract: Digitalization for dynamics studies can focus on a single process component, on a separate power plant, or on several consumers and producers interconnected by electricity, district heating, district cooling or natural gas networks. As boundary values, estimated curves for consumption, weather and wind are needed. If an up-to-date digital twin is available throughout a plant's entire life cycle, it is possible at any time to test how the processes, automation and electrical systems involved work together. The requirement specifications for a sustainable digital twin are quite rigorous. It must be possible for the available experts on process, automation and electrical systems to update the model using the simulation platform's CAD interface, by drawing process, automation and electrical diagrams and filling in the necessary attribute values. With the graphics it must also be possible to combine components into a new subprocess and to specify a new symbol of one's own. It should not be necessary to call in experts on ICT, programming or the solution of differential equations. The simulation platform of the digital twin must contain the necessary application-oriented libraries of credibly verified component models. The platform must contain a database for storing the parameters of the components and information on how they are connected to each other. It must be easy to transfer the contents of the database from one computer generation to the next. The platform must be capable of real-time simulation. If the platform is connected to a copy of the control room equipment, a training simulator is obtained. If the automation models are replaced by the real automation system, the dynamics can be tested in advance at the supplier's site, so that commissioning at the real plant can be speeded up. Examples of successful use of digital twins for dynamics studies are presented against the background of computer development. In the future, digital twins can reduce the need for paper documents and written procedure protocols. The specification of digital twins will be speeded up further, and the risk of typing errors reduced, if component symbols and the necessary attribute values, e.g. for pumps or heat exchangers, are in the future published by the suppliers on the Internet.

Jussi Niinimäki, Mika Riikonen and Kalle Jokinen. Process Automation: Optimization of Controls and Effective Utilization of Big Data Produces Results Quickly

Abstract: In today's process industry, automation has a very large impact on a company's competitiveness. The controllers of the automation system are responsible for over 90% of process control, so their optimal operation is a vital condition for a productive and safe process. When process controllers operate unstably, they cause disturbances in the process. The operation of unstable controllers leads to reduced productivity and quality performance. Unstable controllers often also cause safety-related risks.

When the process and its critical controllers are analysed and correlations are sought across the entire process, the control structures can be changed to work better and the operation of the controllers can be optimized. This requires deep professional expertise in both control engineering and process engineering. When the process is optimized, variations decrease and the production target level can be raised while the process still remains stable. This brings clear, direct productivity gains thanks to increased production, but also other significant monetary and safety-related improvements:

- Energy consumption decreases
- Quality improves
- Raw material costs decrease
- Maintenance costs decrease

In addition, considerable improvements in safety are obtained without separate investments:

- Process and personnel safety improves
- Environmental safety improves

Optimizing the controls requires a large amount of measurement data so that enough is known about the prevailing conditions. The Industrial Internet (IIoT), digitalization and Big Data are present-day trends in which large amounts of process data are utilized. For process industries that already make extensive use of automation, the problem is not too little data collection but how the data can be genuinely exploited. The right, critical measurements for process performance and condition monitoring must be identified so that the right issues can be addressed. Especially in the process industry, large amounts of data have been available since the early days of automation, from the beginning of the 1980s, so this is nothing new. The raw measurement data itself often does not tell much; rather, cause-and-effect relationships are found with computed indicators derived from the measurements and control outputs. Computational analyses often get hold of condition-monitoring issues considerably better than traditional measurement-based condition monitoring.

The text reviews the general state of process controls and the opportunities that their optimization can offer. Real-life examples are used to show concretely what has been achieved, and it is discussed how data can be exploited even further, for example in condition monitoring and in the management of maintenance as a whole.

Mats Friman. Model-Based Design: Experiences from Valve Controller Development

Abstract: This paper discusses model-based design, which is a control design method that utilizes simulation. We present some experiences from adopting model-based design in a real product development project, the development of the Neles NDX valve controller. There are many advantages with model-based design, most notably more efficient development, improved quality, and cost savings. We assess our experience of such advantages and we discuss some examples.

Nikolaos Papakonstantinou and Tommi Karhela. Towards a multidisciplinary platform based on OPC UA for accessing plant data: Open P&ID data access

Abstract: Accessing data related to a complex multidisciplinary system such as a plant is a challenging task due to the different tools and data formats of each engineering discipline. Although in some cases standard information exchange formats have been suggested, the Computer Aided Engineering (CAE) tool vendors show a lack of interest in their adoption. This leads not only to vendor lock-in (a very difficult migration path from one tool to another) but also to major difficulties in accessing this information for innovative new applications. An open platform is needed to provide unified access to all plant data, allowing the easy transfer of information between engineering disciplines and between lifecycle phases, and also enabling innovative applications. The platform presented in this paper is a first step towards this goal. The proposed platform is demonstrated with a case study of a partial mining process (autoclave) where Piping and Instrumentation Diagram (P&ID), control automation and simulation data are accessed over OPC UA using the 3D plant model as a gateway.

 

Tomi Krogerus and Kalevi Huhtala. Diagnostics and identification of injection duration of common rail diesel injectors

Abstract: In this paper, we address the problem of diagnostics and identification of the injection duration of the common rail (CR) diesel pilot injectors of dual-fuel engines. In these pilot injectors the injected volume is small, and the repeatability of injections and the identification of injector drift are important factors which need to be taken into account to achieve good shot-to-shot repeatability with every cylinder, and therefore a well-balanced engine and reduced overall wear. This information can be used for adaptive control of the injection duration, so that the injected fuel volume ultimately remains the same as originally, and for diagnostic purposes to guarantee engine longevity facilitated by consistent operating conditions throughout the life of the unit. A diagnostics method based on the analysis of the CR pressure signal is presented together with experimental verification results. Using the developed method, the relative duration of injection events can be identified. In the method, the pressure signal during the injection is first extracted after the control of each injection event. After that the signal is normalized and filtered, and a derivative of the filtered signal is calculated. A change in the derivative of the filtered signal larger than a predefined threshold indicates an injection event, which can thus be detected and its relative duration identified. The efficacy of the proposed diagnostics method is demonstrated with results which show that the developed method detects drifts in injection duration and the magnitude of the drift. According to the results, a change of ≥ 10 µs (2% of 500 µs) in injection time can be identified.
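
The processing chain described above (extract, normalize, filter, differentiate, threshold) can be roughly sketched as follows; the sampling rate, filter design and threshold value are assumptions made for illustration, not the calibrated values of the study.

```python
# Rough sketch of the described chain for one injection window of the common-rail
# pressure signal: normalize, low-pass filter, differentiate, threshold the
# derivative. Sampling rate, filter and threshold are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100_000.0            # assumed sampling rate [Hz]
THRESHOLD = 3.0           # assumed limit on the normalized derivative (per ms)

def injection_duration_us(pressure_window):
    p = np.asarray(pressure_window, dtype=float)
    p = (p - p.mean()) / (p.std() + 1e-12)               # normalize
    b, a = butter(4, 2000.0, fs=FS, btype="low")         # low-pass filter
    p_filt = filtfilt(b, a, p)
    dpdt = np.gradient(p_filt) * FS / 1e3                # derivative per millisecond
    idx = np.flatnonzero(np.abs(dpdt) > THRESHOLD)       # injection-related change
    if idx.size == 0:
        return 0.0                                       # no injection detected
    return (idx[-1] - idx[0]) / FS * 1e6                 # relative duration [us]

# Example with a synthetic pressure dip of roughly 500 us.
t = np.arange(0.0, 0.003, 1.0 / FS)
p = 1500e5 - 5e5 * ((t > 0.001) & (t < 0.0015))
print(injection_duration_us(p))
```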

 

Antti Pelkola. Optimization of Milk Powder Production using MPC with Minimum Energy Consumption


Abstract: In the food industry it is often preferable to produce milk in the form of powder. There are several reasons for this, such as changing market conditions, long transport distances for further processing, and environmental circumstances. Milk powder as an intermediate product also creates more freedom for flexible logistics.
Milk powder is traditionally dried in a large spray dryer with heated drying air to reach a desired powder moisture content. The spray dryer unit is a huge energy consumer and, in principle, not very energy efficient. In milk powder production, skimmed milk is first concentrated to a desired solids content in an evaporator. This process unit also uses a lot of energy to remove a large amount of water. In some cases the two process units are dynamically quite strongly coupled via a limited intermediate volume.
Valio Oy in Seinäjoki, Finland, has recently completed an Advanced Process Control (APC) project to increase skimmed milk powder production in co-operation with its APC vendor, Neste Jacobs Oy. The basic idea was to maximize production with minimum energy consumption using real-time optimization with multivariable model-predictive control (MPC) technology. Combining the evaporator and the spray dryer in the MPC system created highly promising improvements in the milk powder production. The realized MPC system is designed to maximize production by continuously pushing both units against dynamically varying process constraints as well as product quality constraints. The MPC solution utilizes measured and real-time calculated process and product quality variables as controlled or constraint variables. The key controlled product quality variables are provided by an on-line near-infrared (NIR) analyser. The MPC application is also equipped with a recipe system, which allows several product qualities to be produced with the same control structure.
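To make the constraint-pushing idea concrete, the following is a minimal sketch of a linear MPC that maximizes throughput while respecting a quality constraint, written with the cvxpy optimization library; the two-state model, its coefficients and the constraint limits are purely illustrative assumptions and not the actual application described in this paper:

```python
import cvxpy as cp
import numpy as np

# Illustrative 2-state model: x[0] = evaporator solids content, x[1] = powder moisture
A = np.array([[0.90, 0.00],
              [0.00, 0.85]])
B = np.array([[0.10, 0.00],
              [0.05, -0.10]])   # u[0] = feed rate, u[1] = drying energy (both scaled 0..1)
N = 20                          # prediction horizon
x0 = np.array([0.45, 0.035])    # illustrative current state

x = cp.Variable((2, N + 1))
u = cp.Variable((2, N))

constraints = [x[:, 0] == x0]
for k in range(N):
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    u[:, k] >= 0.0, u[:, k] <= 1.0,   # actuator limits
                    x[1, k + 1] <= 0.04]              # powder moisture quality constraint

# Maximize throughput while penalizing drying energy: the optimum pushes the
# feed rate against the actuator and quality constraints, as described above.
objective = cp.Maximize(cp.sum(u[0, :]) - 0.2 * cp.sum(u[1, :]))
cp.Problem(objective, constraints).solve()
print(u.value[:, 0])            # first control move to be applied
```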
Over long-term production campaigns, the MPC application package has been shown to keep production 10% higher than before the MPC installation. The operation of the plant is more stable and the drying batches are more uniform than before. After the MPC installation it has become much easier for the control room operators to produce repeatable drying batches.
The paper is organised as follows: first the process is presented together with the client's expectations towards the real-time optimization and control. The control and optimization strategy is then presented with some final performance examples, such as normal production maximization and energy minimization at times when maximum production is not available. The steps and timeline of the MPC implementation project are roughly outlined, and finally the project results are presented.

 

Johan Saarela. BIOGAS FRÅN STORMOSSEN

Abstract: Letting organic material decompose under oxygen-free conditions to obtain an energy gas, biogas, is nothing new. There are indications that biogas was used to heat bathhouse water in the Assyrian Empire as early as the 10th century BC.

Over the last two decades, biogas has become a locally produced and renewable alternative to imported fossil natural gas. Within the EU as well as in the individual member states, plans and targets are being drawn up for how the share of renewable energy can be increased in the different sectors. In the transport sector, upgraded biogas, biomethane, is already a real alternative today. There are about 1.2 million gas-powered vehicles in Europe, of which only 2,000 are in Finland. In Sweden, by contrast, there are over 50,000 gas-powered vehicles, which together replace over 1 TWh of fossil fuel.

The biogas market has great growth potential in Finland, and more and more new investments are made every year. There is also a large untapped potential in existing plants: about 20 percent of the gas is flared off every year as surplus. Through automation and optimization this waste can be reduced, which also means increased revenue because the energy is recovered.

The advantage of biogas production is that all kinds of organic material can be mixed together and still produce gas, but this is also what becomes the problem. The plant cannot be tuned to one specific operating point but must be able to work over a wide range. It is not only a mechanical process but also a biological one, in which the living conditions of the microorganisms play a major role in how well production works. The capacity of a conventional power plant can to some extent be ramped up and down, whereas a biogas plant runs year round for decades as long as the organisms get enough feed.

Jouni Aro. Using OPC Unified Architecture to Enable Secure Industrial Internet Solutions

Abstract: OPC Unified Architecture (OPC UA, IEC 62541) is becoming the communication standard for Industrial Internet solutions. The German Platform Industrie 4.0 has defined it to be the communication protocol of choice for the Reference Architecture for Industrie 4.0 (RAMI 4.0). The German Federal Office for Information Security (BSI) has evaluated OPC UA and recommends it for industrial environments because of the built-in security mechanisms.
OPC has had a strong background as an interoperability protocol in industrial automation systems since the 1990s. OPC Unified Architecture is the second generation of the protocol, which has been designed from the ground up to replace "Classical OPC" with a robust, high-performance, secure protocol that is platform independent. It enables flexible integration of different systems that deal with measurement data, including current values, alarms and historical data, and extends to incorporating various existing information models, such as IEC 61131-3 (PLCopen), ISA-95, AutomationML, etc.
OPC Foundation is co-operating with all major standardization organizations that are defining information models, to also incorporate the existing models to OPC UA. This makes OPC UA capable of providing a standard means to communicate various information models through the same channel and therefore integrating different systems and information models together. This makes OPC UA a truly unique platform for the Industrial Internet.
Reference Architecture for Industrie 4.0 (RAMI 4.0) is a complete architecture model for future manufacturing processes, including product life-cycle management (based on IEC 62890), production hierarchy levels (based on IEC 62264/ISA-95 and IEC 61512/ISA-88) and different business layers. The model is largely based on various existing standards, and OPC UA has been selected as the standard communication protocol for the model. OPC UA has the capability to standardize the connectivity between all manufacturing automation systems.
RAMI 4.0 also defines the concept of an Industrie 4.0 component. The concept defines an "enhanced product", where an Administration Shell surrounds any physical component that can be connected to an Industrie 4.0 compatible environment. The Administration Shell is a software component that enables configuration and management of life-cycle data for the physical component. OPC UA will provide the standard communication for the Administration Shell.
Other international organisations, such as the Industrial Internet Consortium and Made in China 2025, are following the progress of Industrie 4.0. The reference architecture models of these programs are being synchronized with RAMI 4.0. This may lead to a globally standardized model.
Prosys OPC UA Java SDK provides a first-class library for developing OPC UA applications in Java. The 100% Java implementation of the OPC UA communication protocol and a high-level application programming interface (API) provide a solid basis for building secure and reliable applications and software components for the needs of the Industrial Internet. The SDK includes a code generation capability, which enables any OPC UA information model to be converted to Java classes that are easy to use in development. The information models enable integration of systems and components on a semantic level, in addition to the pure communication level. The code-generated classes provide a genuine Java API for using the information models in all types of OPC UA applications.
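For readers who want to experiment with the basic concepts, a minimal client read over OPC UA can look like the sketch below; it uses the open-source python-opcua library rather than the Prosys Java SDK discussed here, and the endpoint URL and NodeId are hypothetical:

```python
from opcua import Client   # community python-opcua package, used here only for illustration

# Hypothetical endpoint and NodeId of a measurement exposed by an OPC UA server
client = Client("opc.tcp://localhost:4840/example/server/")
# Signed and encrypted sessions can be requested before connecting, e.g.:
# client.set_security_string("Basic256Sha256,SignAndEncrypt,client_cert.pem,client_key.pem")
client.connect()
try:
    node = client.get_node("ns=2;s=Demo.Temperature")   # hypothetical node identifier
    print("current value:", node.get_value())           # read the current value attribute
finally:
    client.disconnect()
```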


Marita Hietikko. Certificating a safety related part of a control system

Abstract: Certification is written evidence indicating that an object, person or organization has certain characteristics and fulfils certain national or international requirements. This confirmation is usually made in the form of an external review, assessment, audit or education. There are different types of certification. In first-party certification the product manufacturer or service provider offers assurance that certain requirements have been met. In second-party certification, an association to which the organization or individual belongs provides the assurance. Third-party certification includes an independent assessment which declares that specified requirements relating to a product, person, process or management system have been met. The accreditation body may allow a notified body to provide verification and certification services. These services are intended to ensure and assess compliance with previously defined standards and regulations, but also to provide an official certification mark or a declaration of conformity.

This paper highlights the significance and advantages of utilizing third-party certification services to prove the functional safety of a programmable electronic system, a safety-related part of a control system, or safety-related control functions of machinery or other automation systems. Activities and important issues relating to the hardware and software safety life cycle process are discussed, based especially on the requirements of IEC 61508 and ISO 13849-1. The focus is especially on logic units intended for use in safety functions (for example safety logics, logic units for two-hand control devices, and logical processing components for safety system signals used in bus and coach safety).


Antero Ollila. How big is the role of CO2 in the global warming – are the tight carbon emissions justified?

Abstract: According to the Intergovernmental Panel on Climate Change (IPCC), the global warming since the industrialization in 1750 is due to the increased greenhouse (GH) effect caused by increased concentrations of the GH gases. Despite this claim, the IPCC has not been able to assess what the contributions of the GH gases to the GH effect are. The author has carried out spectral analyses and his results are: H2O 81 %, CO2 13 %, O3 4 %, CH4 & N2O 1 %, and clouds 1 %. Because the total absorption is already 93 % of the maximum at the altitude of 1.6 km, which is the average global cloud base, the GH gas impacts are almost the same in clear and all-sky conditions. The warming impact of CO2 is very nonlinear, which means that in the present climate H2O is an 11.8 times stronger GH gas than CO2, whereas in the total GH effect this ratio is 6.2:1.

According to this study the commonly applied radiative forcing (RF) value of 3.7 Wm-2 for a CO2 concentration of 560 ppm includes water feedback. The same value without water feedback is 2.16 Wm-2, which is 41.6 % smaller. The author has analyzed the value of the climate sensitivity (CS) and the climate sensitivity parameter (λ) using three different calculation bases. These methods include energy balance calculations, infrared radiation absorption in the atmosphere, and the changes in outgoing longwave radiation at the top of the atmosphere. According to the analyzed results, the equilibrium CS (ECS) is at maximum 0.6 °C and the best estimate of λ is 0.268 K/(Wm-2) without any feedback mechanisms. The latest warming scenarios of the IPCC for different CO2 concentrations until the year 2100 include the same feedbacks as the 2011 warming, i.e. only water feedback. The CS value of 3.0 °C would mean that other feedback mechanisms should be stronger than water feedback. So far there is no evidence of these mechanisms even though 40 % of the change from 280 ppm to 560 ppm has already happened.
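The reported no-feedback sensitivity follows directly from the quoted parameter and forcing values:

```latex
\Delta T = \lambda\,\Delta RF
         = 0.268\ \mathrm{K\,(W\,m^{-2})^{-1}} \times 2.16\ \mathrm{W\,m^{-2}}
         \approx 0.58\ \mathrm{K} \approx 0.6\ ^{\circ}\mathrm{C}
```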

The nature of water feedback can be analyzed over the period from 1979 to 2015, for which accurate temperature and humidity measurements are available globally. The atmospheric Total Precipitable Water (TPW) changes during ENSO events are essential parts of the El Niño Southern Oscillation (ENSO) process. The TPW changes during the ENSO events almost double the original ENSO effects. On the other hand, during the Mt. Pinatubo eruption and during the three latest solar cycles, a long-term water feedback effect cannot be found despite the rapid warming from 1980 to 2000. This empirical result confirms that the assumption of no water feedback in calculating the climate sensitivity of 0.6 °C is justified. Because there is no long-term positive feedback, it explains why the IPCC model-calculated temperature of 1.2 °C in 2015 is 44 % greater than the average 0.85 °C of the pause period since 2000.

The recycling of carbon dioxide (CO2) fluxes between the atmosphere, the ocean and the biosphere is the key process in maintaining the CO2 concentration in the atmosphere. The author has developed a one-dimensional semi-empirical atmosphere-ocean-biosphere model, 1DAOBM-2, based on a four-box presentation. The 1DAOBM-2 contains two major parameters, which have been tuned to adjust the total CO2 net flux rate and the anthropogenic net flux rate from the surface ocean into the deep ocean based on the observed values. The surface ocean part is based on the known dissolution chemistry according to Henry's law, depending on the atmospheric CO2 concentration and the surface ocean temperature. Simulations have been used to calculate the dynamic responses to step changes from the actual fossil fuel rate to zero in 1964. The results show that the anthropogenic CO2 decay rate follows very accurately the observed decay rate of radiocarbon 14C, which has a residence time of 16 years. This is the expected result given the nature of anthropogenic CO2 in the system of the atmosphere, the ocean and the biosphere. The decay of the total CO2 in this system is much slower, with a residence time of 55 years matching an adjustment time of 220 years. The simulations of the atmospheric net CO2 rate by 1DAOBM-2 from 1960 to 2013 show a coefficient of determination r2 = 0.75 (r2 = 0.81 when the Pinatubo eruption effects are eliminated). The simulations also show that the present anthropogenic CO2 fraction in the atmosphere is 8.0 %, and it explains the observed δ13C value of -8.4 ‰ extremely well. The problem of the sink division between the ocean and the biosphere could not be solved completely. A mass balance study shows that before 1956 the ocean and/or the biosphere acted as a source for the total CO2 increase in the atmosphere and thereafter as a sink. This study suggests that the division ratio between the ocean and the biosphere is 60 % / 40 % for the period from 1750 to 2013. The high correlation between the ocean uptake and the net increase of the total atmospheric CO2 strongly indicates that the ocean has been the sink after 1956.

The future scenario simulations by 1DAOBM-2 show that if the CO2 emission stays at the present level of about 10 GtC/y, the atmospheric CO2 concentration would stabilize at a level of about 700 ppm in 2300. The important question is what the warming impact of this CO2 concentration would be. According to the analyses of the author, the temperature increase would be 0.77 °C. Two conclusions can be drawn from this result. The present estimate of the global warming since 1750 is 0.85 °C, and the warming impact of CO2 in the present climate is only 0.3 °C, so there are other forces affecting the global temperature.

The temperature history of the Earth shows that the Roman and Viking periods, about 2000 and 1000 years ago, were at least as warm as now, and they have been called climate optima. The retreating glaciers in Canada have revealed forests which were growing during the period from 700 to 1000. The climate of the Earth is continuously changing. There are forces other than the GH gases which are causing the major part of the global warming. These forces are cosmic forces, and the main contributor is the Sun, which provides 99.97 % of the energy of the Earth. On the other hand, CO2 is not a pollutant of the atmosphere but an essential gas for life on the Earth. At a concentration of 150 ppm, plants start to die; the optimum concentration of CO2 for plants is about 1600 ppm. Considering these facts, the planned restrictions of CO2 emissions and their time schedules are too tight.
 

Hayder Al-Hakeem and Janne Koljonen. Addition of Variable Length Floating Point Numbers in FPGA

Abstract: Numerical computation is plagued by the finite word length and the consequential accuracy and complexity issues.

As for accuracy, simple overflow and underflow cases are not the only problem. Even small rounding errors may accumulate and lead to totally erratic results, particularly when the computing task works close to roots, with chaotic differential equations, or over a wide dynamic range. In limited cases these can be handled, e.g., by modifying the algorithms or the order of operations, using wider number formats, or estimating and compensating for the rounding errors programmatically. The important fact is, however, that the hardware does not give much support for detecting and handling accuracy problems and numerically unstable arithmetic operations. In the worst case, catastrophic numerical results are not revealed until in the final product, and the result may be total malfunction. If this happens in critical systems, it is not certain that even redundant numerical systems provide protection, as they have passed all the tests that, evidently, have not incorporated the critical input sequence.

As for the complexity of the hardware, parameters to look after are silicon area/number of logic elements, latency, maximum throughput, and, nowadays of increasing interest, energy consumption. The dominant factor in energy efficiency is the amount of data. Hence, the idea of increasing the word length should be considered twice. Moreover, the more bits the arithmetic units need to handle, the worse the timing characteristics become. This is not just due to the increased complexity of the basic addition and multiplication components but also to the extra logic that, e.g., floating point operations require: shifters, exception handling, etc. Nevertheless, standard floating point exceptions cover only a limited set of extreme cases, such as division by zero.

One approach to pursue better accuracy with a limited increase in complexity is the use of variable length floating point numbers. John Gustafson has suggested a universal number format (unum) that has extra fields added to the IEEE 754 format. In the unum (x, y) format, the length of the exponent field is determined by x bits and the length of the mantissa by y bits. However, changing the actual length of the exponent for a given number also changes the exponent bias that is used in the interpretation of the value of the number. This causes extra logic in the hardware implementation.

Another addition in the unum format is a bit (ubit) that is used to indicate whether the number has been rounded or not. The hardware can increase the actual number of bits in use to keep the results accurate. This naturally increases the amount of data and the energy consumption. Therefore, there could be an option that an application could determine the required accuracy. When the ubit is active, unums represent number intervals instead of exact numbers. This property could be used to control accuracy and to detect and handle problems already at the hardware level, thus possibly simplifying software testing.
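A deliberately simplified software sketch of such a variable-width format is shown below (in Python rather than VHDL); the field order, the omission of the trailing size fields of the full unum encoding, and the IEEE-like interpretation of the width-dependent exponent bias are simplifying assumptions made for illustration only:

```python
def decode_unum(bits, es_size, fs_size):
    """Decode a simplified unum-style word (illustration only, not the full format).

    Assumed layout: sign | exponent (es_size bits) | fraction (fs_size bits) | ubit.
    Subnormals and exception codes are ignored here.
    """
    ubit = bits & 0x1                                    # 1 => inexact, an open interval
    fraction = (bits >> 1) & ((1 << fs_size) - 1)
    exponent = (bits >> (1 + fs_size)) & ((1 << es_size) - 1)
    sign = (bits >> (1 + fs_size + es_size)) & 0x1
    bias = (1 << (es_size - 1)) - 1                      # bias depends on the exponent width
    value = (-1.0) ** sign * (1.0 + fraction / (1 << fs_size)) * 2.0 ** (exponent - bias)
    return value, bool(ubit)

# Example: es_size=3, fs_size=4; bits 0 011 1000 0 encode +1.5 exactly
print(decode_unum(0b001110000, 3, 4))   # -> (1.5, False)
```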

To the best of our knowledge, no hardware implementations of universal numbers exist in the literature. In the paper, a limited set of unum hardware is implemented in VHDL, simulated using SystemVerilog test benches, and synthesized on an FPGA. The scope is limited to unsigned addition in order to study its implications for the hardware architecture of the standard floating point addition algorithm. The FPGA logic complexity and timing characteristics are tested and compared using several unum (x, y) formats. Moreover, possibilities to utilize variable length number formats at the hardware and software levels are discussed.


Olli Saarela and Ari Poukka. On-Line Pulp Quality Information

Abstract: Production of pulp from wood is a significant industrial activity in Finland. In 2015, more than 7 million tonnes of pulp were produced in Finland. Important pulp quality parameters include brightness, dirt level, fibre length distribution, and the potential of forming a strong and even fibre network. On-line measurements for the first three are available, but the fourth one, related to bonding and fibre strength, can only be measured in the laboratory. In laboratory analyses, the pulp sample is first refined, then made into a paper sheet, dried, and finally measured. This takes time and incurs expenses, both factors limiting the number of analyses that can be carried out. Furthermore, laboratory analyses have been found to include a level of measurement noise that is not small enough in relation to the actual quality variation of pulp. All these factors together make the laboratory analyses less than ideal for pertinent quality control.

A new on-line measure of pulp quality was developed and named Botnia FOX. Evident connections have been demonstrated between Botnia FOX and paper/board machine operation. Depending on the control strategy of each paper/board machine, e.g. whether pulp is refined to constant freeness or with constant specific energy consumption, the connections have been observed in one or more parameters like pulp consumption, paper/board machine runnability, energy consumption, and quality variables of the produced paper.


Vladimir Bochko. Regressive Principal Component Analysis

Abstract: In this paper, we introduce a technique called Regressive Principal Component Analysis (RPCA). RPCA has many features in common with PCA and is capable of capturing nonlinear data structure. The method is efficient for data with a low degree of nonlinearity in automation control.

Risto Tiusanen and Jere Jännes. Techno-economical evaluation of operating concepts in novel mobile machinery systems

Abstract: Increasing needs for better productivity, better mobile work machine utilization and higher work quality are driving work process management towards automated production control or material handling process control instead of improving the management of separate manual work machine operations. In the mobile-machine manufacturing industry, there is an increasing need to understand system-level aspects and to learn how to identify, analyse and control new risks in complex automated mobile work-machine systems.

In early system development phases, it is important to be able to evaluate different concepts and compare them in a systematic way in order to make well-reasoned decisions on how to proceed, not only from the technical point of view but also by estimating life cycle costs and assessing possibilities for upgrading systems during their operational lifetime. Based on experience, it can be said that the work done before the start of the actual system design, and the resources used, both time and money, pay for themselves in the later stages of the system design process.

Development from single manual machines to automated machine fleets has brought questions related to the appropriate and cost-effective automation level, system-level safety requirements for unique machinery applications, and how to manage system-availability issues throughout the life cycle. Work processes will change, operating concepts will change and there will be a need to think about what kind of infrastructure is needed to enable automated operations. Process designers need to change their mindset from task optimization towards process optimization.

The special needs for the evaluation of operating concepts come from questions related to the level of automation/autonomy and new operating principles: driverless operation, remote operation, fleet operation from one cabin, etc. There is also a need to think about the whole process operation, not just one task. The situation becomes even more complex if there are needs for both manual and automated operations in parallel or synchronized in the same area, or linked to each other in the same work process.

In this paper we describe the approach and the applied methods for multi-criteria decision analysis of operating concepts in the context of complex mobile work machinery. The aim of this methodology is to support the decision making process and the establishment of an automation strategy in the early conceptual design phase, and to support the definition of appropriate automation development steps.

The approach follows the general systems engineering approach and combines different perspectives and methods, such as technical feasibility studies, safety and reliability risk analyses and a procedure for rough estimation of system life cycle costs. The approach has the following phases:

- Description of the work process, its phases and main functions
- Identification and classification of important process requirements for evaluation
- Definition and setting of the concept evaluation criteria and metrics for machinery system operation concepts
- Description of the alternative concepts
- Concept screening
- Concept scoring

In this paper we also describe some results and experiences from the implementation of this techno-economic evaluation approach and the methods used in a case of concept development for a large-scale material handling system utilizing heavy-duty mobile machinery in an underground environment. The case company was finalizing the conceptual design, including production control and material handling systems, for their first-of-a-kind industrial plant. The case company was starting the detailed design of the plant and all its technical systems. This means, among other things, that the company has to decide the level of automation for the machinery fleets in the planned processes.

The aim of the work in this case was to support the company in their decision making in the early design phases of the plant design. The objective of the work was to evaluate the material handling system concepts against the defined process and production requirements. The three most significant KPIs in this case were safety, availability and lifecycle costs, safety being the most important criterion.

Three different operating concepts were studied in this case: manual operations, remotely controlled operations and automated operations. For this study the control modes were defined as follows:

- Manual control: The operator uses/controls the machines at the actual working site, executing the whole working sequence.
- Remote control: The operator uses/controls the machines from a distance (near the machine and/or e.g. from a control room), executing the whole working sequence.
- Automated operation: Either the automation system executes the whole working sequence fully automatically, or the operator gives permission to proceed with the sequence step by step, typically from the control room.

Experiences of the case study will be discussed including among others the following viewpoints:

- Work flow and decision making logic to exclude/rule out options/concepts that are not feasible
- Functional description using functional block diagrams
- Framework for requirement specification
- Identification of essential safety and reliability risks related to the operating modes (manual, semi-automatic, automatic)
- Concept screening in two phases: technical / application-specific feasibility
- Concept scoring conducted by utilizing a coarse Life Cycle Cost (LCC) analysis for selected functions


Jarmo Alander and Hayder Al-Hakeem. unum++: A Flexible Floating Point Format, Some Experiments

Abstract: In this report we present a C++ implementation of flexible floating point numbers resembling the format introduced by John Gustafson. The implementation makes it possible to do some monitoring and analysis of floating point calculations. In addition, the importance of automating the monitoring of floating point calculation precision and rounding errors is discussed.

Patrick Zucchetti and Josu Takala. The modeling of knowledge and technology based uncertainty in decision making

Abstract (extended):

The emphasis has shifted significantly from traditional risk management, which can be defined as various modes of "protecting the system and its users from the failures in the system", towards uncertainty management, since uncertainty can provide both opportunities and dangers to the performance of the system /8/. However, in order to manage uncertainty it has to be somehow measured or modeled. The study presented in this paper aims to do so by creating a model based on three distinctive methods and then applying the model in an energy industry case. As a part of the Finnish energy industry, the case company has faced many changes in the market, since the traditional energy industry in Finland, which was previously restricted under national legislation, has encountered a wave of denationalization of operations and an entrance into a more market-economy-like environment /3/. This great change has brought increased uncertainty to the capital-intensive industry, and for this reason more research is truly needed in order to support the decision makers /3/.

It is important to be able to make decisions although there are many things that might hinder effective decision making /6/. For example, there might be many potential options, but only a few can be realized since the resources are usually limited /6/. The first method used in the uncertainty modeling is the Analytic Hierarchy Process (AHP) developed by Thomas Saaty. The idea of the AHP method is to give a method of scaling that evaluates tradeoffs between different criteria such as cost, quality, dependability and so on /5/. In this study of uncertainty modeling, AHP was used to prioritize the criteria the case company had defined for its investment decision making. The weighted criteria were then inserted into the sand cone model, which is the second main method used in the study. The sand cone model illustrates the studied object by showing its hierarchies as well as the relative importance and relationships of the sub-objects /7/. In addition to the hierarchy of decision making criteria, the sand cone model was also used to depict the actual uncertainty.

As Braun defines in his book "Technology in Context", there are always two parts to technology: the first is hardware, such as material artefacts, and the second is software, such as the knowledge which is necessary and immediately associated with the hardware /1/. Furthermore, "uncertainties regarding technology choices, especially from the strategy perspective, may compromise investment projects with regard to, for instance, their budgets and timetables" /4/. In this sense, analyzing uncertainty in terms of knowledge and technology was rather logical, especially in this energy industry investment decision making case. In practice, the evaluation was done with the requirement section of the sense and respond (S&R) methodology, in which the respondents have to evaluate each S&R questionnaire attribute in terms of basic, core and spearhead technologies /9/. In the new model the S&R questionnaire attributes were replaced with the investment criteria. All in all, the respondents were asked to give their view of the company's technology levels (i.e. the amount of basic, core and spearhead technology in the company) with respect to the decision making criteria. This was done with a specifically designed questionnaire form. The uncertainty was determined from the differences in respondents' views by calculating so-called variability coefficients from the answers. Other measures were also considered, such as the standard deviation, but the variability coefficients seemed to depict the uncertainty best both theoretically and in practice. Lastly, the variability coefficients were inserted into the sand cone in the form of risk that can cause collapses in the sand cone layers /10/.
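The abstract does not spell out the exact formula of the variability coefficient; assuming the common definition (sample standard deviation divided by the mean), a minimal sketch of the calculation could look like this, with hypothetical respondent scores:

```python
import numpy as np

def variability_coefficient(answers):
    """Coefficient of variation of the respondents' answers for one decision
    criterion and one technology level (basic / core / spearhead)."""
    answers = np.asarray(answers, dtype=float)
    return np.std(answers, ddof=1) / np.mean(answers)

# Hypothetical spearhead-technology shares (%) given by five respondents:
scores = [30, 45, 20, 50, 35]
print(f"variability coefficient: {variability_coefficient(scores):.2f}")
```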

Through the used methods some uncertainty could be detected in the investment decision making of the case company /10/. For example, from the sand cone model it could be seen that there are many layers in which the risk of collapse is over 100 percent, and hence the investment decision making can be questioned /10/. The next step was to study which technology level caused the most variability among the answers /4/. Since the spearhead (i.e. future) technology and knowledge was determined to be the main source of uncertainty, followed by the core technology, it could be concluded that the company bases its technology and knowledge management mainly on basic technology /4/ /10/. This was not seen only as a disadvantage, since basic technology can also be seen as a way to secure the distribution of energy to the company's customers /10/. All in all, from the big picture it can be argued that knowledge and technology seem to be essential in the investment decision making /10/. All the results of the study were validated with the market-based validation introduced by Kasanen, Lukka and Siitonen in their article "The Constructive Approach in Management Accounting Research" (1993). The market-based validation includes altogether three different innovation-diffusion-based market tests, two of which were used in this study /2/.

REFERENCES

/1/ Braun, Ernest: Technology in Context: Technology Assessment for Managers. London, New York: Routledge, 1998, p. 165.

/2/ Kasanen, Eero, Kari Lukka & Arto Siitonen (1993). The Constructive Approach in Management Accounting Research. Journal of Management Accounting Research. 5(1993), 243-264.

/3/ Mäkipelto, Tomi & Josu Takala: The model based on the analytic hierarchy process for dynamic decision making in the energy industry: a case analysis of investment energy production in Finland. International Journal of Nuclear Governance, Economy and Ecology. 2(2009) 3, 281-295.

/4/ Räikkönen, M., Takala, J., Toshev, R., Välisalo, T., Uusitalo, T., Kunttu, S., Tilabi, S., Daneshpour, H., Shakeel, S.R., Zucchetti, P., Vatunen, A.-M.: Towards risk-conscious investment decision-making and value creation. VTT Technology 282, 2017.

/5/ Saaty, Thomas L.: The Analytic Hierarchy Process. United States of America: McGraw-Hill, 1980, p. 287.

/6/ Takala, Josu, Jarkko Hirvelä, Tapani Talonen & Riitta Vuolteenaho: Making Customized NPD Project Decision for Technology Roadmapping. IAMOT 2005.

/7/ Takala, Josu, Juha Leskinen, Henry Sivusuo, Jarkko Hirvelä & Tauno Kekäle: The sand cone model: illustrating multi-focused strategies. Management Decision. 44(2006) 3, 335-345.

/8/ Takala, Josu & Teuvo Uusitalo: Resilient and Proactive Utilization of Opportunities and Uncertainties in Service Business. Proceedings of the University of Vaasa. Reports 177, 2012.

/9/ Takala, Josu, Jari Koskinen, Yang Liu, Mehmet Serif Tas & Martti Muhos: Validating Knowledge and Technology Effects to Operative Sustainable Competitive Advantage. Management and Production Engineering Review 4(2013) 3, 45-54.

/10/ Takala, J., Zucchetti P., Daneshpour H., Kunttu S., Välisalo T., Pirttimäki J. & Kiiski P.: The evaluation of investment decision making with knowledge & technology rankings and the sand cone model. Proceedings of the International Conference on Innovation & Management ICIM2016, November 28-30, 2016, Kuala Lumpur, Malaysia, Wuhan University of Technology Press.

John-Eric Saxén, Jerker Björkqvist and Hannu Toivonen. Adaptive diagnostics of internal combustion engines using data-driven anomaly detection and symbolic approximation

Abstract (extended):

Large combustion engines used in marine traffic or power plants have become increasingly dependent on computerized control and diagnostics. Meanwhile, the availability and amount of measurement data obtained have increased by orders of magnitude thanks to advances in sensor and embedded computing development. While the mechanical and chemical aspects of combustion engines have seen limited advances during the recent decades, it is now recognized that large increases in efficiency and decreases in operating downtime can be achieved through data analytics and computer control, without the need to invest in re-building the existing machinery. This development has in turn led to an increased need for companies to obtain expertise in the field of data analytics and to develop novel data-driven methods.

In diagnostics of combustion engines, traditional implementations have been based on threshold detection. A minimum and maximum threshold for each sensor reading is fixed, and an alarm is triggered when it is exceeded. This has led to several issues, for example false alarms, which occur when a threshold is specified too tightly for a given condition. Furthermore, calibration and tuning of the diagnostic methods have typically been performed on-shore rather than being tuned automatically and on-line. The goal of this research has been to develop an adaptive data-driven diagnostic method which can react to the environment and adaptively change the way that alarms are triggered based on previous data. This, in turn, allows the same diagnostic algorithms to be run on a different set of engines under different conditions.

In this paper, a data-driven diagnostics method was developed utilizing an existing technique for data compression and dimensionality reduction. Data-driven diagnostics has the main advantage of not requiring a priori knowledge about the system, and the parameters can easily be adapted during the engine lifetime. In order for data analysis to be performed efficiently and in real time on big datasets, it is usually necessary to reduce the data dimensionality through discretization. It is, however, desirable that the discretization preserves the central trends and characteristics in the data. A method developed by Lin et al., known as Symbolic Aggregate approXimation (SAX), fulfills these requirements and has been successfully applied, e.g., to gene sequencing and diagnostics of rolling element bearings (Georgoulas et al.). In SAX, a reduced time series is obtained by first performing a piecewise aggregate approximation, which transforms the original time series into s equally sized segments with a mean value computed for each segment. In the second stage, data numerosity is reduced by first normalizing the data and then discretizing the values into w alphabetic symbols, where the breakpoints are determined so that each symbol occupies an equal-sized area under the Gaussian curve.
In the adaptation of SAX to diagnostics, we consider transition analysis of the discretized time series. By analyzing the transformed time series within a certain time window, the discrete transitions between symbols can be mapped into a probability matrix, which gives the likelihood of a transition occurring between two symbols as a ratio of the total number of transitions. By mapping transitions between all symbols, it is possible to detect diagnostic anomalies within the thresholds as well as to obtain an adaptive outer threshold, by considering the first and last symbols of the alphabet as representing a fault condition. In order to apply this method to on-line diagnostics, a sliding time window size and a segment size must be determined either manually or from the data. The segment size in our method has been determined based on the variance of the observed time series, whereas the sliding window size is process dependent and defines the adaptability of the method. A longer sliding window implies more constant thresholds and less tolerance to trend changes, whereas a smaller window allows the method to adapt to level changes in the data while still allowing identification of abrupt changes.
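A compact sketch of the SAX discretization and the symbol-transition matrix described above could look like the following; the per-window normalization, the equal-probability Gaussian breakpoints and the transition counting follow the description in the text, while the parameter values and helper names are illustrative:

```python
import numpy as np
from scipy.stats import norm

def sax_symbols(x, n_segments, alphabet_size):
    """Piecewise aggregate approximation followed by Gaussian-breakpoint discretization."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)             # normalize the window
    segments = np.array_split(x, n_segments)
    paa = np.array([s.mean() for s in segments])       # PAA mean per segment
    # Breakpoints giving equal probability under the standard normal curve
    breakpoints = norm.ppf(np.arange(1, alphabet_size) / alphabet_size)
    return np.digitize(paa, breakpoints)               # symbols 0 .. alphabet_size-1

def transition_matrix(symbols, alphabet_size):
    """Empirical transition probabilities between consecutive SAX symbols."""
    counts = np.zeros((alphabet_size, alphabet_size))
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    total = counts.sum()
    return counts / total if total else counts

# Illustrative use on a sliding window of sensor readings:
window = np.sin(np.linspace(0, 6, 600)) + 0.1 * np.random.default_rng(0).normal(size=600)
syms = sax_symbols(window, n_segments=60, alphabet_size=5)
P = transition_matrix(syms, alphabet_size=5)            # low-probability transitions flag anomalies
```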

Performance of the diagnostic method is demonstrated with both simulated and real data examples. In the simulated case, a model of the common rail fuel pressure variations was developed using a Simulink mass balance model with inflow and outflow valves. The model makes it possible to simulate common faults, for example injector faults and tank leakage. This allows both residual analysis between the model and simulated faulty data, as well as adapting the data-driven method directly to the simulated fault data. In the next step, the diagnostic method was validated on real data obtained from a full-scale Wärtsilä engine-generator set. The inlet and outlet water temperature sensor readings from the engine's low-temperature cooling circuit were chosen, as these time series contained both trend changes and abrupt changes in temperature. The results demonstrate that the method is able to detect anomalies in real datasets with large trend changes and variance, which cannot be achieved using a fixed threshold-based approach.

(Lin et al.) J. Lin, E. Keogh, L. Wei, S. Lonardi: Experiencing SAX: a Novel Symbolic Representation of Time Series. Data Mining and Knowledge Discovery, Volume 15, Issue 2 (2007), pp. 107-144.
(Georgoulas et al.) G. Georgoulas, P. Karvelis, T. Loutas, C. D. Stylios: Rolling element bearings diagnostics using the Symbolic Aggregate approXimation. Mechanical Systems and Signal Processing, Volumes 60-61 (2015), pp. 229-242.

Outi Rask, Mikko Laurikkala and Mikko Salmenperä. Revised Distillation Column Laboratory Offers New Opportunities for Automation Education in Tampere3 Universities

Abstract: The evolution of automation calls for up-to-date education. An automation system contains modern information and telecommunications technology, and intelligence is built into nearly all devices. The students of the field must not only adopt the latest technology but also master legacy systems.

Tampere University of Technology (TUT) has a distillation column in research and education use. The actual process involves a closed circulation where a compound of water and ethanol is being distilled. The process has several controlled variables and complex interactions between them. The hardware of the distillation column is more than 30 years old, while the automation system around it has evolved over the years. This paper describes the latest reform of the automation system, where particular attention was paid to modern educational use.

In the modernization project, the automation system was revised in its entirety. All equipment and software were replaced, and some of the field devices were also changed. On the other hand, some of the older field devices remained. Thus, legacy technology is also introduced to the students; not all equipment in real-life industrial processes is the most modern.

The reform aimed at meeting educational needs in particular. Examples of this include measurement circuits that can be easily disconnected from the connection strip, thus enabling easy testing of the analog signal. Process control is distributed to two process stations. Components familiar from industrial systems were introduced, such as a separate field box where measurement connections are assembled. The system comprises all components relevant to process automation, distributed as in a real automation system.

The new distillation column environment serves the automation education of TUT at several levels. At the first level, first-year students study the equipment with the power turned off. The task could be, for example, finding given devices with the help of the process chart. Augmented reality creates new opportunities: a mobile application could give additional information on the device currently in the picture.

At the second level, the power of the automation system is turned on but the process is only running cold. The students see flow control loops in action and learn how a controller works. At the third level, the students are allowed to configure controllers, still with the cold process however. Finally, master level students advance to the fourth level, where they heat up the process and get to control the distillation.

In addition to the education of TUT, these reforms and new features benefit Tampere3 projects. The goal of Tampere3 is to integrate Tampere University of Technology, Tampere University, and Tampere University of Applied Sciences (TAMK) into one university. The reforms make a new kind of collaboration between TUT and TAMK automation education possible. Virtualization and a solid laboratory network connecting the two universities safely offer possibilities to design new teaching materials and exercises for students in both universities without significant investments.

Cooperation in automation education between TUT and TAMK has been discussed in the Control3 workgroup, which consists of automation teachers from both universities. Control3 discusses the possibilities that Tampere3 collaboration offers for automation engineering education at both universities. There are some similarities in control theory education at the two universities, and some of it can be implemented together in the future. However, the main opportunity lies in the laboratories of both universities. TAMK has good student laboratories, for example for teaching PLC programming. In addition, they have good equipment for teaching the technological principles of automation, such as control theory and measurement technology. On the other hand, TUT has more complex learning environments, such as the distillation column, a headbox process and a batch control process with distributed control systems.

Collaboration started in the autumn term of 2016, when 17 students from TAMK were invited to visit the distillation column laboratory. The students spent one day at TUT and worked with the column: the process was started up, controlled and shut down. In addition, the students accessed field devices and determined the structure of the control system and the control application. The idea was to get to know the process as a whole. Based on discussions in the laboratory and the reports the students wrote afterwards, the day seems to have been rewarding. The students liked this kind of learning opportunity and said that the exercise concretized many issues they had learned previously during their studies.


Kurt-Erik Häggblom. Design av experiment för identifiering av multivariabla dynamiska system

Abstract: There are several excellent books on system identification, but they say little about the design of experiments for multivariable systems. The most advanced advice is that all inputs should be excited simultaneously in an uncorrelated way. It has been noticed, however, that this does not work well for ill-conditioned systems, which have a gain matrix that is nearly singular. Such systems are not uncommon in the process industry.

A certain method, and some variants of it, has been proposed for the design of identification experiments to remedy this problem. The main idea is that the so-called gain directions should be excited in an uncorrelated way. An estimate of the system's static gain matrix is required for the experiment design. This contribution gives an overview of this method. It has usually been applied to small systems with two or three inputs and outputs; here an application to a system with four inputs and outputs is shown.
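A minimal sketch of the main idea, exciting the gain directions obtained from an estimated static gain matrix with uncorrelated signals, is given below; the pseudo-random binary excitation and the plain rotation back to the physical inputs are simplifying assumptions, and an actual design may, for example, also scale the excitation amplitude per direction:

```python
import numpy as np

def gain_direction_inputs(G_hat, n_samples, amplitude=1.0, seed=0):
    """Design input signals that excite the gain directions of G_hat.

    G_hat : estimated static gain matrix (n_outputs x n_inputs).
    Uncorrelated binary sequences are generated in the coordinate system of the
    input singular vectors and rotated back to the physical inputs, so that both
    the strong and the weak gain directions are excited.
    """
    rng = np.random.default_rng(seed)
    U, s, Vt = np.linalg.svd(G_hat)
    n_inputs = G_hat.shape[1]
    # One uncorrelated pseudo-random binary sequence per gain direction
    w = amplitude * rng.choice([-1.0, 1.0], size=(n_samples, n_inputs))
    # Rotate the direction-wise excitation into the physical input space
    return w @ Vt          # each row is the input vector u(k) to apply

# Illustrative 4x4 gain estimate and a 500-sample excitation sequence:
G_hat = np.array([[2.0, 1.9, 0.5, 0.1],
                  [1.8, 2.1, 0.4, 0.2],
                  [0.3, 0.2, 1.0, 0.9],
                  [0.2, 0.3, 0.9, 1.1]])
U_seq = gain_direction_inputs(G_hat, n_samples=500)
```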

As the presented example shows, the desired result is not fully achieved because of the system dynamics. An extension that takes the dynamics into account is therefore presented. This is a new method that can be implemented in a similar way as the previous method and is thus practically usable. Other methods proposed in the literature to account for the dynamics rely on complicated, less practical optimization procedures.

Bhimsen Ghimire and Petri Välisuo. Off-grid energy management system

Abstract: The Meteoriihi exhibition centre is disconnected from the main power grid, and it produces all the energy it needs locally by means of wind and solar energy, with a diesel generator as a backup. Energy is needed for lighting, the computer systems of the co-located star observatory of Vaasa Andromeda, the audio-visual equipment of the exhibition, and for heating. An off-grid system must maintain the balance between consumption and production on its own, which can be demanding, especially when the energy is produced mainly by intermittent renewable energy sources. The Meteoriihi energy system contains batteries for balancing production and consumption and a diesel generator to provide backup energy, but in addition flexible demand side management is also needed. Demand side management is most useful for consumption which requires a lot of energy and is not too time critical, like heating, which requires most energy during cold seasons but can be time shifted when needed.

Typically, electrical heating appears as one big load which can be switched on or off. This switching creates a large stepwise increase or decrease of load, which may degrade the quality of electricity and may be difficult to compensate with renewable energy production. Therefore the heating elements should be adjustable so that they serve as a flexible controllable load that helps to maintain the energy balance instead of endangering it. This can be achieved by having several smaller heating elements which are switched on and off sequentially, according to the current power situation, or by using inverter-controlled heating elements or heat pumps, which can be controlled continuously. The purpose of this work is to implement an energy management system for Meteoriihi, including demand side management by heating control, using the LabVIEW programming environment. The system is tested using measured solar irradiation, wind speed and outside temperature of the area, and the heating control supporting demand side management is tested in a laboratory using a polystyrene scale model of the building and a PWM-controlled heating element.
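A minimal sketch of the sequential switching idea for the heating elements is shown below, written in Python for illustration rather than in the LabVIEW environment used in the work; the element ratings, the reserve margin and the switching rule are illustrative assumptions:

```python
def heating_command(surplus_power, element_powers, margin=100.0):
    """Choose which heating elements to switch on, given the surplus power [W].

    element_powers : ratings of the individual heating elements [W].
    Elements are switched on one by one, so the heating load follows the
    momentary production surplus in small steps instead of one big load.
    """
    commands = []
    remaining = surplus_power - margin        # keep a small margin in reserve
    for p in element_powers:
        on = remaining >= p
        commands.append(on)
        if on:
            remaining -= p
    return commands

# Hypothetical 3 kW heater split into four 750 W elements, 1.9 kW of surplus power:
print(heating_command(1900.0, [750.0, 750.0, 750.0, 750.0]))  # [True, True, False, False]
```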

Kaj Portin. Online mätning av gaskvalitet för gasmotorstyrning

Abstract: Until recently, engine control based on the quality of gaseous fuels has been difficult due to the lack of accurate and fast measurement technologies. Recent advances have, however, made it possible for engine operators to follow gas quality in real time and have thereby also enabled control applications based on it. This project has studied technologies that enable real-time measurement of gas quality and listed engine control applications that could benefit from it.
The main part of the work concerns the testing and evaluation of a real-time gas sensor based on tunable-filter infrared spectroscopy, the adaptation of a new methane number model, and a complete list of possible engine control applications. In addition, a short description of a possible communication scheme between the gas sensor and the engine automation system is given. Finally, a control application is simulated to demonstrate that engine control based on gas quality is possible.
Tests with the gas sensor showed that it is possible to measure light hydrocarbons with sufficient accuracy. Heavier hydrocarbons, however, proved to be more difficult, and more tests should be carried out to verify their measurement. The methane number model used was suitable for the quality calculation of natural gas, but its range of validity for other gas mixtures may cause problems in future investigations. A list of control applications based on real-time measurement of gas quality was compiled, and their functions are described briefly.

Esko K. Juuso. An advanced teaching scheme for control education

Abstract: Engineering education needs to provide both theoretical knowledge and problem-solving skills. A traditional organization of the courses is to give separate lectures and practical sessions. During the lectures, the theoretical aspects are covered while the practical sessions provide an improved basis for the implementation and application of the theoretical aspects. Lecturing is not the best possible method in reaching the goals mentioned above. The course is passed through an exam covering both the theoretical and practical aspects. In such a teaching scheme, learning becomes exam-oriented and the students work hard for a short period to memorize the material. This stressful work does not support deeper understanding of the topics. Students would like to have more hands-on exercises to go deeper in the topic. Computer exercises are good tools in teaching the skills. However, this approach would require extensive time to cover the curriculum.

The courses should also provide material for both theory and practice to support later research and applications. This is also essential for the motivation for learning: there is not enough time to teach, but there is plenty of time to learn. Lectures provide channels for more complicated topics which cannot be sufficiently covered with interactive problem-based exercises. Learning by doing is very efficient when it is combined with lectures that provide additional material and perspectives.

The paper presents an advanced teaching scheme which aims to balance these goals by combining different teaching practices. The scheme has been gradually developed and tested in advanced control courses.

The teaching scheme combines lectures, computer exercises, case studies, seminars and reports. Problem-based learning is the key, and the teachers act as project managers facilitating the learning process. In the gradually refining learning material, each teaching method has its own role. The key parts of the theory are presented in lectures. The students come across simple implementation tasks in computer exercises and continue this work in case studies, which gradually cover more and more of the topics of the courses. Seminar tasks go deeper into selected topics and the students present their results in seminars. They also need to act as opponents in the seminars.

Full benefits can be achieved if all these are in balance. Since the lecture notes are freely available, interest in lectures is challenging to maintain. To activate students, lecture questions have been given before each lecture. Short lecture reports have an impact on the overall evaluation.

Exercises give the students success feedback in implementing and using the methodologies in simple examples. The case study aims to get experience in building more complex solutions. This also emphasizes the fact that the interactive teaching is only a part of the learning: a considerable part of the work should be done independently or in groups.

The theory part is deepened in the seminar tasks. A fairly short presentation requires finding the essential points of the topic. The opponent tasks also require focusing on the topic. The teacher corrects possible misunderstandings, asks additional questions and directs discussions towards integrating topics. Recently, the more advanced topics have been moved from lectures to seminars, where the background is already better understood after discussions. To support this, the number of seminars has been increased: originally there was only one seminar; three topic-related seminars are much better at integrating the teaching and learning process.

The courses can cover only a part of the topics which will be later needed in practice or research. Therefore, the process of learning is the most important part. Also an external lecture and demonstration have been included to give a practical example of how the course topics are used in the industrial scale.

Several short reports and presentations on specific topics are required during the course. The case study report combines the exercise part and the final report summarizes the key things and collects the student feedback. The scheme is time consuming for the teachers but it also helps in updating personal knowledge and integrating teaching and research.

The scheme has been used in teaching two advanced courses: Simulation and Computational intelligence in automation. The first version was introduced already twenty years ago when the traditional method was seen to be insufficient in reaching a usable level. The scheme has been completely renewed during the years by introducing more supporting systems. Feedback from students, the quality of reports, the level of discussions and experiences on the effects of grading have been taken into account in this development. The scheme has been found to be beneficial for overall learning progress and the students are increasingly ready to use the course material in practice.

Kim Berg, Jori Kankaanpaa, Jussi Keskikuru, Emil Koivunen, Minne Paljakka, Jannika Rintamaki, Antti Hannonen, Joel Reijonen, Jarmo Alander and Vladimir Bochko. Digital tattoo: Evolutionary computing in machine vision

Abstract: In this paper, we introduce a student project implemented in the course AUTO3120 Evolutionary computing at the University of Vaasa. The project includes several tasks: recognition and tracking of face images, video segmentation and colorization, and fast edge detection. The task solutions are based on an evolutionary approach inspired by biological evolution. The biological aspect makes the study particularly attractive. Students use either software programming to implement each task in Python and Matlab (for prototyping) or hardware programming using an FPGA.
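As an illustration only (not the project's actual code), the sketch below shows the kind of evolutionary loop typically used for such tasks: a population of candidate parameters, e.g. an edge-detection threshold pair, is evolved by selection and Gaussian mutation. The target vector and fitness are stand-in assumptions.

```python
# Minimal evolutionary-loop sketch in Python; the "fitness" is a placeholder
# standing in for, e.g., agreement between a thresholded edge map and a
# reference edge map.
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.1, 0.3])          # hypothetical reference thresholds

def fitness(individual):
    return -np.sum((individual - TARGET) ** 2)   # higher is better

pop = rng.uniform(0.0, 1.0, size=(30, 2))        # 30 candidate threshold pairs
for generation in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]       # truncation selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.02, (30, 2))
    pop = np.clip(children, 0.0, 1.0)             # Gaussian mutation + bounds

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("evolved thresholds:", best)
```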

Esko K. Juuso. Smart Adaptive Big Data Analysis

Abstract: Models are understood as relationships between variables and are used to predict properties or behaviours of the system. Variable interactions and nonlinearities are important in extending the operating areas. Linear methodologies are sufficient in many applications. They can also be extended with principal components, and semi-physical models provide a feasible solution for many applications. Nonlinearities have commonly been handled with interaction and quadratic terms. Artificial neural networks (ANNs) continue this by using more complex architectures. Knowledge-based information can be handled with fuzzy set systems and combined with neural networks. First-order ordinary differential equations are solved by numerical integration, and special solutions have been developed for identification. These approaches, which are also used in ANNs and fuzzy set systems, define structures for hybrid dynamic models. Local models need to be combined in complex systems. In Big Data analysis, complex model structures are problematic since they may cause overfitting.

An alternative way is to transform the variable values to facilitate the use of linear model structures in the analysis of interactions. This corresponds to the use of membership functions in fuzzy set systems, where the interactions are handled with rules. The linguistic equation (LE) approach originates from fuzzy set systems: rule sets are replaced with equations, and the meanings of the variables are handled with scaling functions. Normalization and z-scores are extended by a nonlinear scaling technique, which is needed in constructing nonlinear models with linear equations: z-scores are special cases. Constraint handling and data-based analysis improve the possibilities to update the scaling functions recursively. The LE approach, together with knowledge-based systems, neural networks and evolutionary computation, forms the computational intelligence part of the system development.
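A minimal sketch of the scaling idea, not the author's exact formulation: a monotone, variable-specific mapping onto a dimensionless range defined by a few anchor points. Here the anchors are taken from data quantiles purely as an assumption; in the LE approach they are derived from generalised norms.

```python
# Monotone variable-specific scaling to [-2, 2] via interpolation between
# five anchor values (quantiles used here only as illustrative placeholders).
import numpy as np

def make_scaler(x):
    anchors = np.quantile(x, [0.01, 0.25, 0.50, 0.75, 0.99])  # operating range
    levels = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    return lambda v: np.interp(v, anchors, levels)

data = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.5, size=1000)
scale = make_scaler(data)
print(scale(np.median(data)))   # ~0.0: the "normal" level of the variable
```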

Smart adaptive systems (SAS) can be classified by three levels:
1. adaptation to a changing environment;
2. adaptation to a similar setting without explicitly being ported to it;
3. adaptation to a new or unknown application.
The smart use of intelligence by integrating specific intelligent systems is essential in the development of complex adaptive applications. Technically, automatic black-box modelling could be possible in various Big Data problems by using combinations of these methodologies. Domain expertise is an essential part of integrated solutions in order to understand and assess their applicability.

This paper classifies modelling methodologies, focuses on the nonlinear scaling and integrates the LE approach in developing modelling applications for complex systems. Various applications are briefly discussed.

Variable-specific scaling functions are generated from data. This analysis can be parallelized, since the parameters of the scaling functions are calculated independently. Recursive updates can also be obtained for the variables whose meanings need to be modified. Slight modifications, including expansions and contractions, can be done by updating the generalised norm values corresponding to the parameters. Drastic changes may need a new analysis of which norms correspond to the parameters. Even then, the analysis is done separately for individual variables.
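As a hedged illustration of why these updates are cheap: a generalised norm of the power-mean type can be maintained from running sums, so each variable's scaling parameters can be refreshed sample by sample without re-reading the whole data set. The class below is a sketch under that assumption, not the paper's implementation.

```python
# Recursively updatable power mean ||x||_p = ((1/N) * sum(x_i^p))^(1/p),
# assuming positive-valued data; new batches only update n and the p-th
# power sum.
import numpy as np

class PowerMean:
    def __init__(self, p):
        self.p, self.n, self.s = p, 0, 0.0
    def update(self, x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        self.n += x.size
        self.s += np.sum(x ** self.p)
        return (self.s / self.n) ** (1.0 / self.p)

pm = PowerMean(p=2)                      # p = 2 gives the quadratic mean
print(pm.update([1.0, 2.0, 3.0]))        # norm after the first batch
print(pm.update([4.0]))                  # recursively updated with one sample
```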

Local calculations are needed to obtain features from high frequency wave-form signals. On the other hand, interpolation is needed for less frequently measured or analysed variables.

Knowledge-based information presented in natural language is translated onto the same scale as all the measurement data.

All variables and features need to be scaled to the same dimensionless range [-2, 2]. Then model structures of any kind could be used in the analysis of interactions. In practice, linear interactions are sufficient, which demonstrates the strength of the nonlinear scaling methodology.

Genetic tuning can be used efficiently since the whole system, including scaling functions and interactions, is defined by parameters whose feasible ranges are known.
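The snippet below is a toy illustration of this point, not the paper's implementation: because all inputs are pre-scaled to [-2, 2] and the feasible parameter ranges are known, an evolutionary optimizer (here SciPy's differential evolution, chosen only as an example) can tune the interaction coefficients directly.

```python
# Evolutionary tuning of interaction coefficients within known bounds,
# on a synthetic linear interaction of two pre-scaled variables.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(200, 2))            # scaled inputs in [-2, 2]
y = 0.7 * X[:, 0] - 0.4 * X[:, 1]                # synthetic target

def cost(params):
    a, b = params
    return np.mean((a * X[:, 0] + b * X[:, 1] - y) ** 2)

result = differential_evolution(cost, bounds=[(-1, 1), (-1, 1)], seed=2)
print(result.x)                                   # close to [0.7, -0.4]
```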

Smart adaptive systems of different levels are well supported. Adaptation to a changing environment means updating the parameters. A similar setting, without the system being explicitly ported to it, can be reached by first updating the scaling functions and then modifying the interaction coefficients. New or unknown applications can also be achieved with more drastic changes of the parameters.

The variable-specific analysis, combined with the compact extraction of interactions and a unified representation of different types of information, is clearly beneficial in Big Data analysis.

Jukka Kortela, Antton Lahnalammi and Sirkka-Liisa Jämsä-Jounela. Advanced teaching and process monitoring environment for the factory of the future

Abstract: The ABB Industrial IT Extended Automation System 800xA has been set up for centralized data collection, analysis, real-time monitoring, and advanced control. The ABB system is further expanded by a cloud-based advanced control and monitoring system (Fig. 1). The state-of-the-art ABB setup includes OPC UA, providing a single entry point into the system. Each piece of equipment is automated using a dedicated PLC station and different fieldbuses, diversifying the teaching and research benefits. The cloud-based system will include an OPC UA client which allows access to the ABB OPC server using the OPC UA specifications. OPC UA scales very well in several directions: it allows OPC UA applications to run on embedded devices with very limited hardware resources, as well as on very powerful machines such as mainframes.

The final setup will run on the high-velocity web framework Play, which has been designed especially for Java and Scala. Play 2 provides native Akka support (toolkit and runtime) for Play applications, enabling highly distributed systems in the cloud. In addition, it follows the model–view–controller (MVC) software design pattern, separating the software application into three interconnected parts: models, views, and controllers. The model manages the data, logic, and rules of the application. The monitoring and optimization algorithms are implemented in the model component. In addition, an OPC UA client library will be utilized for reading and writing data from the industrial processes.
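The proposed system uses a Java/Scala OPC UA client library inside Play; purely to illustrate the read pattern against an 800xA OPC UA endpoint, the sketch below uses the open-source python-opcua package. The endpoint URL and node id are placeholders, not values from the paper.

```python
# Illustrative OPC UA read with python-opcua; endpoint and node id are
# hypothetical placeholders.
from opcua import Client

client = Client("opc.tcp://localhost:4840")        # placeholder server endpoint
client.connect()
try:
    node = client.get_node("ns=2;s=Tank1.Level")   # placeholder node identifier
    print("current value:", node.get_value())
finally:
    client.disconnect()
```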

The proposed advanced teaching and process monitoring system is evaluated with test-algorithms implemented in the Spark cloud and the results are presented, analyzed, and discussed.

Susanna Mikkola, Iiro Harjunkoski and Sirkka-Liisa Jämsä-Jounela. Optimization scenarios for waste-to-energy systems

Abstract: Municipal solid waste (MSW) generation is increasing due to rapid development, population expansion and urbanization. For this reason, it is important to dispose of MSW harmlessly in order to build a resource-saving and environmentally friendly society, as well as to reduce pollution and to improve the living environment. Available landfill space and natural resources are diminishing while the energy demand is growing. The need for renewable energy utilization is rising as a result of a growing international market and ambitious goals for increased material recycling.

Since MSW is considered a renewable energy source, it can be utilized in waste-to-energy (WTE) plants to produce electricity. By utilizing MSW, waste reduction, recycling and harmlessness objectives are achieved, and circular economy requirements are met with considerable economic and environmental benefits. A large part of the waste is already being used for energy production, and optimization of this WTE process is under active research. Waste treatment technologies, otherwise known as WTE technologies, are able to recover energy from waste efficiently. WTE is the best waste management option with respect to energy and environmental parameters, and particularly with respect to GHG emissions.

A standard circular-economy waste management supply chain consists of waste collection, pretreatment, processing (WTE and recycling) and disposal, as well as transportation between the phases. Optimizing the whole supply chain results in both economic and environmental benefits. Short-term production planning (scheduling) is required for the optimization. Typically, scheduling problems include allocation of resources, allocation of time slots, constraints and optimization. Making sure resources are available on time requires exact planning of equipment, material, utilities and personnel.

The MSW management (MSWM) supply chain consists of household collection, transportation, treatment, material recycling, composting and waste disposal. Because disposal of MSW in landfills is not sufficient, WTE processing plants are introduced into the MSWM supply chain. As a result, the adjusted supply chain comprises collection and pre-treatment, storage, transport and energy conversion. Scheduling problems relate to every phase of the supply chain. Optimization of the whole MSWM supply chain includes consideration of both economic and environmental objectives.

In the present work, a generic MSWM supply chain model was developed. The generic model consists of waste collection, separation, processing and product delivery parts. Data from a Mexican case study in waste management was applied for testing and demonstration. The Mexican case study is a single-period optimization of the supply chain framework.

The generic model does not represent any existing MSW chain. Hence, data from the Mexican case study was used to build and test the model. The structure of the model was adopted from the Mexican case study, however in a modified form in order to make it more relevant to the Nordic countries. Testing of the model is performed in parts: the model is tested to show that the results remain valid each time a new part of the supply chain is added. With the full-scale model, different variations are tested. The testing is used to show that the model reacts logically to changes.

The model was built in GAMS and applies mixed integer linear programming (MILP). The model formulation includes several mass balances, constraints for transportation and processing, and objective functions. The constraints involve the different phases of the MSWM supply chain, such as separation, final disposal, distribution of products, transportation of waste and processing of waste into value-added products. The optimization model can simultaneously select the processing facilities and technologies for the MSW collected from the different sites and the distribution of wastes to dumps, plants and markets. The objective functions relate to minimizing the costs and maximizing the profit. Potentially, other optimization targets can be used, such as environmental or social objectives.
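To make the structure of such a formulation concrete, the sketch below shows a heavily simplified MILP of the same flavour (routing collected waste to capacitated facilities with fixed opening costs), written in Python with PuLP. The actual model is written in GAMS with many more balances and constraints, and all numbers below are illustrative placeholders.

```python
# Toy waste-routing MILP: choose which facilities to use and how to route
# collected MSW at minimum transport + treatment + fixed cost.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

sites = {"A": 120, "B": 80}                 # tonnes of MSW collected per period
plants = {"WTE": 150, "Landfill": 300}      # processing capacities
cost = {("A", "WTE"): 12, ("A", "Landfill"): 5,
        ("B", "WTE"): 9,  ("B", "Landfill"): 7}   # per-tonne transport + treatment
fixed = {"WTE": 400, "Landfill": 0}          # fixed cost of using a facility

prob = LpProblem("mswm_toy", LpMinimize)
x = {(s, p): LpVariable(f"x_{s}_{p}", lowBound=0) for s in sites for p in plants}
y = {p: LpVariable(f"open_{p}", cat=LpBinary) for p in plants}

prob += lpSum(cost[s, p] * x[s, p] for s in sites for p in plants) \
      + lpSum(fixed[p] * y[p] for p in plants)
for s in sites:                              # all collected waste must be routed
    prob += lpSum(x[s, p] for p in plants) == sites[s]
for p in plants:                             # capacity available only if used
    prob += lpSum(x[s, p] for s in sites) <= plants[p] * y[p]

prob.solve()
print({k: v.value() for k, v in x.items()})
```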

Matti Huotari, Juha Röning, Kari Määttä and Pekka Romsi. Photoplethysmographic measurements and analysis of elderly people, clinical patients and their clinical cohorts

Abstract: The amount of biomedical information is increasing; at the same time, diseases like arteriosclerosis (AS) are becoming more common vascular disorders among elderly people. It is predicted that the negative impacts of AS on young people can be greater than on elderly people in the long run. Arterial elasticity measurement (AEM) provides a direct indicator of cardiovascular health and could predict AS. Analysis of AEM can be challenging in the case of elderly people. Accurate measurement and analysis of blood vessel properties would in any case be important for better characterization of arterial diseases and for the development of reliable techniques. Photoplethysmography (PPG) and its results, especially pulse wave decomposition, envelope analysis, and the second-order derivative (SDPPG), could open up new health information for clinics and out-patients.
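As a minimal sketch only (synthetic pulse, not clinical data, and not the authors' analysis code), the SDPPG mentioned above can be obtained by differentiating the PPG waveform twice, e.g. numerically:

```python
# Second-order derivative of a toy PPG pulse (SDPPG); the sampling rate and
# pulse shape are assumptions for illustration.
import numpy as np

fs = 500                                       # assumed sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
ppg = np.exp(-((t - 0.3) / 0.08) ** 2) + 0.4 * np.exp(-((t - 0.55) / 0.1) ** 2)

first_derivative = np.gradient(ppg, 1 / fs)
sdppg = np.gradient(first_derivative, 1 / fs)  # second-order derivative waveform
print(sdppg[:5])
```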

Jeremias Snellman. Optimizing Knock Detection - An analysis of sensor data on the optimization of engine knock detection

Abstract: This thesis is about engine knock and digital signal processing. The purpose of the thesis is to analyse signal data recorded from test runs of Wärtsilä engines. The goal is to investigate whether engines running with light or heavy knock can be distinguished better from engines running under knock-free conditions.

The first part of the document covers the theory of engine knock, its causes and effects, as well as knock detection and knock regulation methods. The theory of engine knock is followed by theoretical aspects of digital signal processing. Two commonly used procedures within digital signal processing are then described: the discrete Fourier transform and digital filtering, along with related terms such as spectral leakage, the fast Fourier transform and window functions.

The second part describes the analysis. The software used for the analysis was DEWESoft X2 and MATLAB. DEWESoft X2 was mainly used for extracting the data from the test runs in cycles of 50 crankshaft angle degrees, as implemented in the digital signal processor used in Wärtsilä's engines. MATLAB was later used for performing the FFT and comparing ratios between sensor data containing knock and sensor data without knock.
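The snippet below is a minimal sketch of this kind of comparison in Python (the thesis uses MATLAB); the sampling rate, band limits and synthetic signal are assumptions, not Wärtsilä's values. A window limits spectral leakage before the FFT, and the energy in an assumed knock band is compared with a reference band.

```python
# Windowed FFT of one analysis segment and a knock-band / reference-band
# energy ratio; all numeric values are illustrative.
import numpy as np

fs = 100_000                                   # assumed sampling rate [Hz]
t = np.arange(0, 0.01, 1 / fs)                 # one analysis window
signal = np.sin(2 * np.pi * 6000 * t) \
       + 0.1 * np.random.default_rng(3).normal(size=t.size)

windowed = signal * np.hanning(t.size)         # Hann window reduces leakage
spectrum = np.abs(np.fft.rfft(windowed)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

knock_band = spectrum[(freqs > 5000) & (freqs < 8000)].sum()
reference = spectrum[(freqs > 1000) & (freqs < 4000)].sum()
print("knock-to-reference energy ratio:", knock_band / reference)
```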

The test results showed that optimizations can be made in order to better distinguish when an engine is running with knock. An extra benefit of decreasing the sampling window size would be saving computational and memory resources in UNIC.

Jarmo Alander. Digitalisaatiolla uutta yliopistoyhteistyökulttuuria / New university cooperation with digitalization

Abstract: Thoughts on a modern digital teaching and research cooperation culture for universities. The focus is on bachelor-level teaching of applied information technology, which exists in every university and which in itself contains the know-how to design, implement and develop modern learning methods and, in particular, the ICT infrastructure they require. In each university, the emphasis and profiling of the applications varies so that they complement each other nicely. The pilot could be used as a model for extending the cooperation to master's and postgraduate levels and to research, and further for organizing teaching and research cooperation in other fields as well.

Yrjö Majanne, Timo Yli-Fossi, Timo Korpela, Tomas Björkqvist, Maria Nurmoranta and Ville Ylä-Outinen. Analysis of dynamic performance of steam boilers

Abstract: In future energy systems, the value of flexible operation and controllability of thermal power plants will increase remarkably due to the increased amount of intermittent and uncontrollable renewable power generation. Intermittent renewable generation disturbs the generation-consumption balance in power systems and, at the same time, it replaces controllable thermal generation capacity. Besides that, increased asynchronous generation connected to power systems via frequency converters, together with a reduced amount of rotating loads, will reduce the amount of stabilizing inertia in power systems. This will make power systems more sensitive to serious disturbances in frequency stability. Thus the increased need for controllable generation capacity must be supplied by fewer thermal power plants, at least until somebody introduces a dynamically fast and economically viable large-capacity storage system for electric energy.

The topic of the proposed paper is the analysis of the dynamic performance of steam boilers. The paper will present results achieved in the FLEXe research program financed by TEKES. The performance analysis is based on simulation studies carried out with a detailed dynamic simulation model of a circulating fluidized bed boiler.

The dynamic performance of a steam boiler consists of two parts: utilization of the internal energy storages of the boiler, and the combustion and heat transfer dynamics from the furnace to the water-steam cycle of the boiler. Heat energy stored in the steel mass of the evaporator riser tubes and the drum walls can be utilized to temporarily boost the generated steam power from the boiler by allowing the drum pressure to drop. A pressure drop in the evaporator lowers the evaporation temperature of the saturated water, producing extra steam. The latent heat of vaporization is extracted from the evaporator tubes. This extra steam helps the boiler to respond during the first seconds of fast load ramping, before the heat power from the furnace to the water-steam system changes. This is very important for frequency control in future energy systems. However, utilization of the internal energy storages of the boiler causes temperature fluctuations in boiler structures. Excessive thermal stressing will reduce the expected lifetime of the boiler structures. Especially sensitive structures are thick-walled components such as the drum and the steam headers collecting steam flows from individual evaporator and superheater tubes. For this reason, it is very important to limit the temperature fluctuations in boiler components to a tolerable level.
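The back-of-the-envelope sketch below illustrates the mechanism described above; every figure in it is an assumed placeholder, not a value from the paper or the simulation model. When the saturation temperature falls with the drum pressure, the heat released by the metal and the saturated water flashes part of the water to extra steam.

```python
# Illustrative energy balance for the extra steam released by a drum
# pressure drop; all quantities are assumed round numbers.
m_metal, c_metal = 200e3, 0.5e3     # kg of drum/evaporator steel, J/(kg K)
m_water, c_water = 50e3, 4.6e3      # kg of saturated water, J/(kg K)
dT_sat = 3.0                        # K drop in saturation temperature
h_fg = 1.0e6                        # J/kg latent heat at drum pressure

released_heat = (m_metal * c_metal + m_water * c_water) * dT_sat
extra_steam = released_heat / h_fg
print(f"extra steam released: {extra_steam:.0f} kg")  # short-term boost before firing catches up
```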

The proposed paper will introduce simulation results of the storage dynamics of the circulating fluidized bed boiler, and show how the utilization of these storages affects the temperatures of different boiler components. This behavior is also reflected in the operation of the steam turbine. The analysis points out the limiting bottlenecks when maximizing the load change rate of a steam boiler. The effects of utilizing the energy storages on the boiler controls are also discussed.

Johnny Backlund. Vision System for Online Defect-Detection - An evaluation of methods for defect-detection for KWH Mirka’s abrasives

Abstract: Mirka Ltd has machines for punching out disks from its abrasive sheets. The sheets have gone through many processing stages during their fabrication, and as a result they contain different kinds of defects. Sheets have been joined several times with different techniques, the grip cloth has been joined, and test pieces have been carved out. These kinds of defects are made on purpose and are usually already marked, but not always. The abrasive roll may also contain unintended defects such as folds or stains.

The goal of this thesis is to develop a complete system, and also to evaluate different methods, for detecting defects in Mirka Ltd.'s abrasive sheets. An Omron FZ5 vision system has been installed at the machine, and the first goal is to configure this system so that it detects the most common defects. The system should perform according to the requirements set by Mirka Ltd and by the environment at the machine. This part of the work shall also give an idea of the suitability of the Omron library for this task.

To compensate for the shortcomings of the Omron library, another method will also be tested: an algorithm based on features in frequency space, implemented in Matlab. Fuzzy membership functions are used to compare measured values against values from faultless material.
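A hedged illustration of the comparison idea (the thesis uses Matlab; the function shape, feature and range below are assumptions): a membership function grades how well a measured frequency-space feature fits the range observed for faultless material, and low membership flags a potential defect.

```python
# Trapezoidal fuzzy membership: 1 inside [b, c], ramping to 0 at a and d.
import numpy as np

def trapezoid(x, a, b, c, d):
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / (b - a), 0.0, 1.0)
    falling = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rising, falling)

# assumed "faultless" range of a feature: fully normal between 0.8 and 1.2
membership = trapezoid([0.5, 0.9, 1.5], a=0.6, b=0.8, c=1.2, d=1.4)
print(membership)    # low membership flags a potential defect
```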

The result is a complete system for detecting the most common defects, along with an evaluation of the suitability of some more advanced methods for detecting defects in the pattern on abrasive sheets. This is summarized in a recommendation on upgrades if more advanced measurements are to be made in the future.

Alice Bittante, Marketa Mikolajkova, Henrik Saxen and Jonas Waller. Optimization of LNG and natural gas supply chains – two models

Abstract: Natural gas is gaining popularity for the production of power and heat because of its lower environmental impact compared to coal and oil. The growing maritime distribution in the form of liquefied natural gas (LNG) will also make natural gas available at sites without connections to existing pipelines, and the production of biogas is also increasing. In order to study the best way of incorporating such gas sources into energy systems, gas delivery problems have been tackled by mathematical modelling and optimization. The paper presents two problems, both formulated as cost optimization tasks, and illustrates how they are solved by the proposed approaches.

The first is concerned with gas deliveries as LNG from a set of supply terminals by a fleet of ships to a number of satellite terminals, where the gas is consumed or transported onwards by truck to other consumers. The solution pursued is the one corresponding to minimum overall costs, and the problem is tackled by mixed integer linear programming (MILP), considering the price of LNG, the investment costs of terminals and trucks, the renting and operation costs of the ships, as well as the costs of alternative fuel used to satisfy the energy demand of the consumers not supplied by LNG. The results give the optimal ship routes and quantities delivered to the satellite terminals, the size of the satellite terminals, the number of trucks, their routes and the quantities of LNG for the in-land supply, as well as the quantities of alternative fuel used. The model developed is illustrated by a case study in the Gulf of Bothnia, considering three supply terminals, seven potential satellite terminals and gas consumers in both Finland and Sweden. The results illustrate how the price difference between the alternative fuel and LNG changes the optimal supply chain.

The second task focuses on the development of the energy supply to an in-land region, where a gas network connected to an existing LNG terminal and/or biogas production sites can be built. A mathematical model of the pipeline system, with optional routes, supply and demand, is developed, further considering the possibility to supply the consumers with LNG by trucks from the terminal or with an alternative fuel. The cost-minimization task is a mixed integer nonlinear programming problem, where the nonlinearities arise in the equations for the pressure loss in the pipes and the compression of the gas. To make the problem more tractable, the equations are linearized, yielding an MILP problem. The model is illustrated by a fictitious case inspired by a region in Finland with one LNG terminal, two biogas production sites and consumers, including industrial sites and a power plant supplying electricity and heat to the society. The optimal supply chain is illustrated under different cost structures of the fuels and under different constraints, demonstrating that the model is a versatile tool for studying future energy supply systems.
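The small sketch below illustrates the linearization idea only (the coefficients, flow range and number of breakpoints are assumptions, and the paper's actual pressure-loss and compression equations are more involved): the nonlinear pressure-loss term is replaced by a piecewise-linear surrogate between chosen flow breakpoints, so the network model can stay within an MILP formulation.

```python
# Piecewise-linear surrogate of a quadratic pressure-loss term.
import numpy as np

breakpoints = np.linspace(0.0, 10.0, 6)      # assumed flow range [kg/s]
dp_at_bp = 0.08 * breakpoints ** 2           # nonlinear pressure loss at breakpoints

def dp_linearized(q):
    return np.interp(q, breakpoints, dp_at_bp)   # segment-wise linear surrogate

q = 7.3
print(dp_linearized(q), 0.08 * q ** 2)       # surrogate vs. true quadratic value
```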