KTP Associates Conference digest 2014


KTP Associates Conference Huxley Building, University of Brighton Thursday 19 June 2014

Conference Digest Editor: Dr Shona Campbell | ISBN 978-1-905593-96-5


KTP Associates Conference

19 June 2014

A one-day conference organised by the University of Brighton KTP Centre

KTP Associates Conference 2014


KTP Associates Conference Huxley Building, University of Brighton Thursday 19 June 2014

Morning session

9.30

Registration and refreshments Location: Huxley Building, Lecture Theatre

10.00

Welcome and opening remarks Shona Campbell, KTP Manager, University of Brighton Location: Huxley Building, Lecture Theatre

10.15

Keynote address Jugal Desai, former KTP Associate, Dando Drilling International Ltd A partnership between my career dream and KTP Location: Huxley Building, Lecture Theatre

10.45

Paper presentations: Engineering Design Introduction by Session Chair: Derek Covill, Senior Lecturer, University of Brighton Location: Huxley Building, Lecture Theatre

10.50

Design and optimisation of counter blow forging process by finite element analysis, Chandragupt Gorwade, KTP Associate (Sheffield Hallam University)

11.10

A framework/performance for new product design ball valve, Catalin Pruncu, KTP Associate (University of Birmingham)

11.30

Implementation of a High Throughput Computing System for Engineering Design Optimisation: A Case Study of a Turbocharger Centrifugal Compressor Aerodynamic Multi-Objective Optimisation, Osarobo Famous Okhuahesogie, KTP Associate (University of Lincoln)

11.50

Development of new generation of smart prosthetic liners, Ana Gallego, KTP Associate (TWI Ltd)

12.10

2-minute ‘Elevator Pitches’ from all KTP Associates presenting posters (see overleaf) Location: Huxley Building, Lecture Theatre

13.00

Lunch, moving straight into the poster session Location: Huxley Lobby

Please turn overleaf for details of the afternoon sessions


Time

Afternoon session

From 13.00

Poster session Location: Huxley Building, room 400

- R&D of Leak Repair Additives for automotive cooling systems, Nick Applin, KTP Associate (University of Brighton)
- Optimising Temperature Measurement Accurately in Solid Oxide Fuel Cells, Farzad Barari, KTP Associate (University of Brighton)
- Reassessing the Design Criteria for Offshore Platforms Using Climate Model Data, Ray Bell, KTP Associate (University of Reading)
- Sequential analysis of sickness absence, Miguel Belmonte, KTP Associate (Lancaster University)
- Putting a Price on People's Futures, Jemma Bridgeman, KTP Associate (London South Bank University)
- Can corporations ensure fair wages, safe working practices and environmental restoration? Rose Dunne, KTP Associate (Birmingham City University)
- Creating Safer Swimming Pools Utilising European Initiatives, Sam Eckton, KTP Associate (University of Brighton)
- Airborne Radar and the Environment: New Solutions to Old Problems? Chris Haworth, KTP Associate (The University of Edinburgh)
- Developing a comprehensive assessment, monitoring and intervention package for people with Alcohol-Related Brain Damage (ARBD) living in a residential rehabilitation facility in Glasgow, Lindsay Horton, KTP Associate (University of the West of Scotland)
- Embedding behaviour change in the community to foster energy efficiency improvements in the private residential sector, Jesus Garcia, KTP Associate (Birmingham City University)
- Retail Consortium Website Development, Emma Kerr, KTP Associate (University of Cumbria)
- Improving the Accuracy of Point of Care HbA1c Monitoring Devices for Diabetes, Jennifer Peed, KTP Associate (University of Bath)
- A collaborative approach to address the benefits and concerns with NHS Medical Records Databases, Nicola Pether, KTP Associate (University of Leeds)
- Tackling the Role of Pigmentation in Ageing Skin – Developing a Novel Methodology, Anne Oyewole, KTP Associate
- Product personalization software for SMEs, Bora Shkurti, KTP Associate (University of Bolton)
- Intelligent Automated Test System Development, Neil Smiley, KTP Associate (Bournemouth University)
- Utilising Herb Waste, Rebecca Smith, KTP Associate (University of Reading)
- Co-producing Mental Health Peer Support in Oxleas NHS Foundation Trust, Victoria Stirrup, KTP Associate (Canterbury Christ Church University)
- Developing a New General Practice service delivery model informed by data mining to predict population need and consequently redesign processes and workforce focus, Alex Tang, KTP Associate (Northumbria University)

Paper presentations: Operations Management and Retail Session chair: Paul Levy, Senior Lecturer, University of Brighton Location: Huxley Building

Paper presentations: Chemistry, Engineering and the Environment Session chair: Graeme Awcock, Principal Lecturer, University of Brighton Location: Huxley Building

14.00

The design and implementation of a bespoke Enterprise Resource Planning system (ERP) for an acoustical engineering company, Bertrand Ugorji, KTP Associate (University of Hertfordshire)

The Role of Research and Development in Developing New Products within a Medium Sized Analytical Manufacturer Jen Hulse, KTP Associate (The Open University)

14.35

Optimal Scheduling and Rostering of Flexible Staff at Container Port Terminals: A Case Study of the Port of Felixstowe, Ali Rais Shaghaghi, KTP Associate (University of Essex)

Oil concentration range extension of an oil in water instrument based on nephelometry, Benoit Oger, KTP Associate (University of Brighton)

14.55

Branding your Local Charity Shop: A Case Study of St Wilfrid’s Hospice Shops Innovative Branding Strategy Implementation, Veronica Malley, KTP Associate St Wilfrid’s Hospice (University of Brighton)

Integrating climate research and extreme event simulation into the assessment of weather-related hazards, Alison Cobb, KTP Associate (University of Reading)

15.15

Can corporations ensure fair wages, safe working practices and environmental restoration?, Rose Dunne, KTP Associate Hockley Mint (Birmingham City University)

The effects of material variations in high value manufacturing industries, Akbar Jamshidi, KTP Associate (University of Bath)

15.45

Best paper and poster awards: Dr Terry Corner, KTP Adviser

16.00

Summing up and formal close of conference: Dr Alasdair Cameron, Vice Chair, KTP National Forum (likely to end at 16.15 and no later than 16.30)



KTP Associates Conference Thursday 19 June 2014
Huxley Building, University of Brighton

Oral presentation papers — Page

A framework/performance for new product design ball valve, Catalin Pruncu, KTP Associate (University of Birmingham) — 3

Implementation of a High Throughput Computing System for Engineering Design Optimisation: A Case Study of a Turbocharger Centrifugal Compressor Aerodynamic Multi-Objective Optimisation, Osarobo Famous Okhuahesogie, KTP Associate (University of Lincoln) — 11

Development of new generation of smart prosthetic liners, Ana Gallego, KTP Associate (TWI Ltd) — 25

The design and implementation of a bespoke Enterprise Resource Planning system (ERP) for an acoustical engineering company, Bertrand Ugorji, KTP Associate (University of Hertfordshire) — 37

Optimal Scheduling and Rostering of Flexible Staff at Container Port Terminals: A Case Study of the Port of Felixstowe, Ali Rais Shaghaghi, KTP Associate (University of Essex) — 47

Branding your Local Charity Shop: A Case Study of St Wilfrid’s Hospice Shops Innovative Branding Strategy Implementation, Veronica Malley, KTP Associate, St Wilfrid’s Hospice (University of Brighton) — 57

Can corporations ensure fair wages, safe working practices and environmental restoration?, Rose Dunne, KTP Associate, Hockley Mint (Birmingham City University) — 65

The use of Complementary Stationary Phases and 2-Dimensional HPLC for the Separation of the Synthesis and Degradants of Tipredane (INN), Jen Hulse, KTP Associate (The Open University) — 71

Oil concentration range extension of an oil in water instrument based on nephelometry, Benoit Oger, KTP Associate (University of Brighton) — 77

Integrating climate research and extreme event simulation into the assessment of weather-related hazards, Alison Cobb, KTP Associate (University of Reading) — 89

The effects of material variations in high value manufacturing industries, Akbar Jamshidi, KTP Associate (University of Bath) — 97




Oral Paper Presentations



A framework/performance for new product design ball valve

Catalin I. Pruncu1,2, Karl D. Dearn1*, Tony Hill2, Robert Watson2
1 Mechanical Engineering, University of Birmingham, B15 2TT, Edgbaston, Birmingham, United Kingdom
2 Truflo Marine Ltd, Westwood Road, Birmingham, B6 7JF, United Kingdom
*Corresponding author: k.d.dearn@bham.ac.uk

Abstract

The requirement for “maximum” product performance might occur when a robust knowledge transfer agreement is developed between a host University and an Industrial Company. A perspective of this agreement is related to the University of Birmingham and Truflo Marine Ltd., administered by the Technology Strategy Board Driving Innovation. The present paper describes a framework that entails a new design capability for a smart nuclear power plant valve assembly. So, it defines the path to be pursued for assessment and optimization of ball valve equipment. The proposed component must ensure reliable capacities of surface interaction in terms of wear and failure damage within different conditions (i.e. severe service life at high pressure; low, very low or high temperature; and water with different pH). The valve must handle as well water hammer, over-pressurization of the systems and damage flow in the structure components. The strategy that we propose for successful performance in this action is based on the continuum interrelation between industrial experience and applied research, with the possibility to understand the route of a product from laboratory tests to the customers’ demands.

Key words: knowledge transfer, tribological condition, Finite Element Methods (FEM)

1. Introduction

The economic importance of the valve components entailed in the structure system (i.e. pumps, turbines, boilers, vessels, and so forth) is recognized to be vital, because they represent upwards of 5 percent of the total cost [1]. A particular configuration of valve is referred to as the ball valve assembly. The principal components of this assembly are classified as: a valve chamber; a ball or a


generally spherical member positioned within the valve chamber; and one or two seat members positioned between the ball and the respective ends of the valve chamber. It is well known that the ball valve enables precise handling with only a strength-through factor: the ball is rotated at 90 degrees to the direction of flow, and the internal flow path creates a flow valve that allows a barrier to flow; when the through-system meets severe conditions (i.e. friction coefficient, pressure oscillation and high torque), this turns into a sealing mechanism that allows damage accumulation and catastrophic failure.

In recent years, a significant volume of published papers has dedicated research attention to the development of better configurations for the sealing mechanism of a ball valve, in order to avoid the issue mentioned earlier. They have focussed attention on predicting the numerical and experimental behaviour of the ball valve [2], on designing the functionality and shape of seat members [3][4][5][6] and the body of the valve configuration [7], and on detecting the valve performance and flow patterns [8]. Unfortunately, they neglected the aspect of the tribological parameters that enable the optimal performance of the valve assembly. So, they ignored the characteristics of the ball surface (i.e. hardness and roughness), seat surface and even body surface, which play a vital role in the prediction of good functionality and number of life cycles under operational conditions. The main purpose of this study is to provide a flow strategy to develop a framework that entails new design capability for a smart nuclear power plant valve assembly that overcomes the existing problems.

2. Materials

Materials for body and ball valve

The body valve assembly could be designed from 316 Stainless Steel (EN 10272:2007), conforming to ANSI/ASME ratings and featuring a 4:1 pressure boundary safety factor. It has to meet the fire safety requirements of API 607 and BS 6755 Part 2, and may additionally be fitted with an environmental seal, protecting it from corrosion attack. In general, the ball can be prepared from the same materials as the body; it can have a mechanism that works on the floating ball principle, with dynamic response seats and an anti-blow-out stem.

Materials for valve seats

PEEK (polyether ether ketone) may be recommended as the seat material for applications at room temperature, because of its good characteristics in terms of friction and hardness. This material allows pressures of the order of 10,000 psi and entails the valve to operate over a wide temperature range, from −54 °C to +232 °C (−65 to +450 °F). Alternatively, seat valves may have specified PTFE seals for operations at −54 °C to +204 °C (−65 to +400 °F), at pressures of 6,000 psi (414 bar) [9]. The valve’s body end connector thread may have a seal ring; a suitable mechanism enables the seal assembly to be installed and removed without the need to detach the valve from the target platforms. For example, the seal assembly may include a seal ring (see details in patent WO/2011/137008) disposed within the valve interior and biased towards the ball element of the valve [10]. A disadvantage of using the thermoplastic materials for producing the seal


for an industrial valve of the type discussed here is the very low working tolerances and/or misalignment in the assembly of the valve components, an issue which is pronounced when a low surface roughness of the ball is encountered. Besides, a recent application [11] proposed a new mechanism to avoid this disadvantage of the seal system. Furthermore, it is expected that these types of valves utilise live-loaded seats that compensate for wear and temperature cycling to ensure leak-tight operation over their entire pressure range [12]. This features stem and end connection seals to prevent shell leakage, and improved cycle life can also be achieved when thermoplastic materials that enclose both a seal system and a ball valve comprising said seal are inserted, withstanding the high differential pressures without leakage [13]. With a circumferential groove near the circle of contact on the ball with a surrounding seat, high differential forces are transmitted beyond the circle of contact, so that deflection due to applied differential pressures results in flexing in the groove area while maintaining full circumferential seat contact to prevent leakage [14].

The severe service life of standard metal seats claimed for nuclear valve applications can be improved with new technologies that allow the use of Hardide Coatings’ tungsten carbide-based coating. This enables the assembly to be used in severe service applications that require metal-to-metal seating, including abrasive and slurry applications, conforming with ANSI 150-1500 and cryogenic to 800 °F maximum temperature applications. The surface finish for the design developed imposes a surface roughness inside of Ra 0.5 um and outside of Ra 0.8 um as standard. Further mechanical or electro-polishing improvement, to Ra 0.2 um, can provide an extremely fine mirror finish for the product, including the handle [15].

3. Settlement process

Nowadays, technological development is recognized as an important tool to progress human life. Although this development is obvious, in certain circumstances the research achieved in University laboratories may not fit very well with industrial requirements. By creating Knowledge Transfer Partnership programmes, an open access is established, with great possibilities to share and exchange the information and requirements from both sides (i.e. University vs. Industrial area). So, the key point of this platform is to focus on getting maximum results and improving the learning performance by an exchange of academic capabilities transferred into a manufacturing process. The main steps for project planning that sustain the path to be pursued to accomplish and to confirm the goal of this framework for designing an improved-performance ball valve are structured around cost, reliability conditions and the requirements standards of national and international authorities. The general steps are:

- To reduce the processes, the boundary statement of the new product should be based on the proficiencies of the existing product solution delivered by the industrial partner so far. A robust enhancement can be acquired by corroborating this statement with a literature review (i.e. journal papers, patents and Standards DIN, ISO, BS...);

- A Finite Element Methods (FEM) approach introduced into the process


structure may lead to overcoming the issues related to the quantity of raw material spent to design a new product, an issue of vital importance. As well, the success of this technique may improve the grade of knowledge within the functionality of the assembly, in particular for designing and to detect/predict the critical contact condition for the tribological effect of the ball/seat/body valve interaction;

- An experimental tribological stand (i.e. pin-on-disc tests, fretting wear test rig) can achieve the validation of the FEM results;

- Then the laboratory results are transferred to the workshop-floor of the Industrial partner, where the final product will be made;

- The product obtained is internally tested and validated prior to external certification being achieved;

- The commercial statement will be acquired by marketing and sale procedures.

This platform will provide, for the academic partner, better insight into the requirements imposed by the industrial market and the issues covered by the product specification when it is generated, not only at the “sample” level. On the other hand, a successful project will enable Truflo Marine Limited to increase their market share within the nuclear and marine processing industry. Therefore the development of cost-effective processes for this type of assembly is a key step in the growth of their business within this market sector.

4. Model of seat embedded and ball surface

The design target of the seat valve involved within a nuclear power plant must ensure robustness in terms of material behaviour as well as structural integrity of surface features. It is recognized that during the last decade significant improvement and optimization of the seat valve was achieved. Although a high level of integrity was accomplished, limitations in the prediction of the mechanical behaviour of valve systems are obvious. This limitation derives from poor reliability of the preloaded system when the “spring model” [5] (see Figure 1a) is used, or from unsatisfactory accuracy in behaviour estimation due to the multiple rings that involve uncertainties in clearance (see Figure 1b) when predicting the real friction coefficient and torque control.

Figure 1. Models of seat assembly of ball valve for high temperature: a) single [5] and b) multicomponent [3]

Besides, these investigations are only based on the mechanical functionality of the system


without clear surface topography evidence; information about the manufacturing processes of the ball valve is totally missing. The incipient assessment performed in the laboratory provides us with enough evidence to confirm the issue mentioned earlier. For example, the map of surface topography achieved from 3D profilometry information of a ball valve, Figure 2, allows us to recognize the necessity of improving the surface condition. Figure 2 emphasizes different surface states.

Figure 2. Maps of ball topography: a) good coating treatment, b) weld defect during operation and c) chatter mark

5. Mechanics of ball valve

The integrity of the ball is another critical feature of the valve structure. Two key points must be considered when prediction of the tribological condition and damage flow is required: the grade of deformation of the virgin material surface of the ball (because it can be manufactured and processed without chemical treatment, entailing only mechanical treatment as surface finish, for example grinding, polishing and so on) submitted to the mechanical loading; and the deformation of the coated ball, since during mechanical loading damage can also occur in the substrate layer. An implicit stand for testing this compound is presented in Figure 3a. The experimental results may confirm whether the material surface stays within its elastic limit, when the mechanical properties remain as in the initial state for the tribological condition, or the material surface yields and the elasto-plastic deflection limit occurs, in which case the implicit surface and tribological condition between ball and seats will obviously change considerably. An approximate solution can be acquired by performing a SolidWorks simulation, as is emphasized in Figure 3b.

Figure 3. a) Testing set-up for the experimental routine, b) numerical analysis solutions

Conclusions

This paper presents evidence to support a general methodology to generate a new framework for new product development. A brief summary of the materials and design


patents employed for the valve assembly was presented. In addition, a successful routine was embedded, based on the continuum interrelation between industrial experience and applied research, offering the possibility of understanding the route of product development from laboratory tests to meeting customers’ demands.

References
1. J. Phillip Ellenberger. Piping and Pipeline Calculations Manual (Second Edition). ISBN: 978-0-12-416747-6, 2014, pp. 281–299.
2. Xue-Guan Song, Seung-Gyu Kim, Seok-Heum Baek, Young-Chul Park. Structural optimization for ball valve made of CF8M stainless steel. Transactions of Nonferrous Metals Society of China, Volume 19, Supplement 1, September 2009, pp. s258–s261.
3. Patent US 8,424,841 B2, Multi-Component Metal Seat Design for Ball Valves. Apr. 23, 2013.
4. Patent US 2013/0299730 A1, Valve Seat for a Ball Valve. Nov. 14, 2013.
5. Patent 5,088,687, Ball valve seat for high temperature service. Feb. 18, 1992.
6. Xue-Guan Song, Lin Wang, Young-Chul Park. Analysis and optimization of nitrile butadiene rubber. Transactions of Nonferrous Metals Society of China, Volume 19, Supplement 1, September 2009, pp. s220–s224.
7. S. Bagherifard, I. Fernández Pariente, M. Guagliano. Failure analysis of a large ball valve for pipe-lines. Engineering Failure Analysis 32 (2013) 167–177.
8. Ming-Jyh Chern, Chin-Cheng Wang, Chen-Hsuan Ma. Performance test and flow visualization of ball valve. Experimental Thermal and Fluid Science 31 (2007) 505–512.
9. Two-piece ball valve for high-pressure applications. World Pumps, Volume 2003, Issue 442, July 2003, p. 6.
10. Patent WO/2011/137008, Seal assembly for a rotary ball valve. Sealing Technology, Volume 2012, Issue 5, May 2012, p. 12.
11. Patent WO/2011/033536, Seal system for industrial valves – particularly ball valves. Sealing Technology, Volume 2011, Issue 8, August 2011, pp. 14–15.
12. Ball valves maintain zero leakage over extended cycle life. Sealing Technology, Volume 2003, Issue 7, July 2003, p. 3.
13. Ball valve offers more consistent seal. World Pumps, Volume 2009, Issue 9, September 2009, p. 12.
14. High pressure ball valve seal assembly. Publication number WO/2008/039198, Sealing Technology, Volume 2008, Issue 7, July 2008, p. 15.
15. Ball Valves UK launches ultra-clean ball valves. World Pumps, Volume 2005, Issue 469, October 2005, p. 6.



Implementation of a High Throughput Computing System for Engineering Design Optimisation: A Case Study of a Turbocharger Centrifugal Compressor Aerodynamic Multi-Objective Optimisation

O. F. Okhuahesogie*,1,2, M. J. W. Riley2, F. J. G. Heyes1, P. Roach1
*KTP Associate (Analytical Engineer) and corresponding author
1 Napier Turbochargers Ltd, UK
2 School of Engineering, University of Lincoln, UK

ABSTRACT This paper presents the implementation of a High Throughput Computing (HTC) system based on HTCondor for multi-objective optimisation of engineering problems as part of a Knowledge Transfer Partnership (KTP) between the University of Lincoln and Napier Turbochargers Ltd. The current study focused on aerodynamic optimisation of a turbocharger centrifugal compressor impeller and vane-less diffuser of the high pressure (HP) stage of a two-stage turbocharger. The high cost of commercial Computer Aided Engineering (CAE) tools and the need for multiple licenses of these tools for cluster computing meant that free, open-source or custom software codes with unlimited licensing were a mandatory requirement for this work. The HTC system discussed in this paper consists of: a parametric CAD program (RC_CAD), developed by the author, that defines a compressor impeller and vane-less diffuser geometry with 23 variables; an in-house meshing program by Napier Turbochargers Ltd; a commercial flow solver (TBLOCK); a modified version of a Differential Evolution Multi-Objective (DEMO) algorithm provided by the University of Lincoln; a mapping program, also developed by the author, for calculating the 8 objective functions and extracting performance data; job management software (HTCondor) for distributing jobs on the Linux desktop cluster; and custom python scripts that bundle all the components together. All the codes developed require no installation and are command-line driven, with each executable less than 1.3MB, hence making them suitable for deployment in a HTCondor pool of desktop machines. The HTC system features are presented, along with a preliminary result which shows a substantial gain of 2% in peak efficiency over the current turbocharger compressor design. Keywords: Turbocharger Centrifugal Compressor, Computer Aided Engineering, Differential Evolution, Multi-objective Optimisation, HTCondor.
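As a rough illustration of the "bundling" role the abstract assigns to the custom python scripts, the sketch below chains a CAD, meshing, solving and mapping stage for a single design evaluation. Every executable name and file name here is a hypothetical stand-in; the actual RC_CAD, meshing, TBLOCK and mapping command lines are not given in the paper.

```python
# Sketch: a driver that runs one CAD -> mesh -> solve -> map pipeline,
# as one design evaluation in the HTC system described above.
# All executable and file names are hypothetical placeholders.
import subprocess

def build_pipeline():
    """Return the ordered command list for one design evaluation."""
    return [
        ["rc_cad", "optParam_in.dat"],   # parametric CAD (hypothetical CLI)
        ["mesher", "geometry.curve"],    # in-house meshing program (hypothetical CLI)
        ["tblock", "mesh.dat"],          # flow solver (hypothetical CLI)
        ["mapper", "solution.dat"],      # objective-function extraction (hypothetical CLI)
    ]

def evaluate_design(work_dir, runner=subprocess.run):
    """Run each stage in order inside work_dir, failing fast on any error."""
    for cmd in build_pipeline():
        runner(cmd, cwd=work_dir, check=True)
```

Injecting the `runner` callable keeps the driver testable without the real executables; in a HTCondor pool, a script like this would be the job each pool machine executes for one candidate geometry.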

1. Introduction Turbochargers are widely used in the automotive, marine, power generation and rail industries. A turbocharger consists of a compressor (usually centrifugal) powered by a turbine (axial or radial) driven by an engine’s exhaust gases [1]. The design of turbomachinery usually starts with a simplified 1-D model [2] before a more accurate 3-D simulation is carried out using CAE techniques such as Computational Fluid Dynamics (CFD) and Computational Structural Mechanics (CSM). CFD and CSM have become an integral part of most engineering design processes in the last couple of decades and will remain so for the foreseeable future. This paper presents an automatic CFD design methodology for the design of the high pressure stage of a two-stage turbocharger compressor for marine applications.

____________________________________________
FOkhuahesogie@lincoln.ac.uk (Osarobo Famous Okhuahesogie)
Osarobo.okhuahesogie@napier-turbochargers.com (O. F. Okhuahesogie)
MRiley@post01.lincoln.ac.uk (Mike Riley)
Frank.heyes@napier-turbochargers.com (Frank Heyes)
Paul.roach@napier-turbochargers.com (Paul Roach)
Paper presented at the KTP Associates Conference, Brighton, UK, 19 June 2014



2. Motivation Turbochargers made by Napier Turbochargers Ltd are used in the marine industry and are subject to stringent International Maritime Organisation (IMO) legislation that requires the reduction of NOx emissions to about 20% of pre-2011 levels by 2016. The use of high pressure-ratio turbochargers is a key technology that is able to meet this target [1]. In addition, the increasing demand by engine manufacturers for turbomachines with higher efficiency, pressure-ratio or mass flow will require multiple runs of CFD/CSM simulations during turbomachinery blade design [3]. This higher performance should be delivered at a lower cost and within a short time frame so that medium-sized businesses like Napier can remain competitive in the market place. The objective of this project was to implement a fast, cheap and accurate design methodology for automatic multi-objective engineering optimisation problems. The current study is focused on a centrifugal compressor aerodynamic multi-objective optimisation for use in the HP stage of a two-stage turbocharger. A vaneless diffuser compressor is used because it has a wide operating map, which is particularly significant during transient operations or rapid acceleration, when the marine engine will require more air.

3. Choice of CAE and Computing Tools Every CFD/CSM design process starts with the creation of the physical geometry using Computer Aided Design (CAD) software; the physical geometry is then discretised/meshed into smaller cells using meshing software and solved using CFD/CSM software. The final process is the analysis of results using post-processing software. For this project, it was decided to use free, open-source or custom software with unlimited licensing to minimise cost and also for easy deployment on a cluster of desktop computers running HTCondor. Gmsh [4] and SALOME [5] were considered for use as CAD and/or meshing programs respectively, while TBLOCK [6], OpenFOAM [7] and Turbostream [8] were considered for use as the solver. Turbostream is a re-implementation of TBLOCK on GPUs and is optimised for speed. However, the budget for this project could not cover its license, so it was ruled out. OpenFOAM is a widely used open-source CFD solver with a lot of online support. However, a turbomachinery-specific solver is not part of the standard solvers supplied with the

publicly available version of OpenFOAM, so it was ruled out also. TBLOCK is a widely used and reliable turbomachinery specific software, but it is not free. However, it was already used by Napier so is considerably cheaper than other commercial CFD solvers. For this reason, TBLOCK was chosen as the solver. The choice of CFD solver then influenced the choice of CAD and meshing programs. Since an in-house meshing program that is compatible with TBLOCK already exists, it was chosen for the meshing task. The in-house meshing program reads in CAD files in .curve format specific to ANSYS BladeGen and ANSYS TurboGrid. However, ANSYS BladeGen and TurboGrid are expensive and limited by licences and were not a feasible option for the CAD stage of the CAE process. Hence, there was a need to develop a CAD program to output .curve file format and complete the design loop. The next task was to link up the CAD, meshing and CFD programs to run automatically and independently. A mapping program was developed to perform this task. The mapping program also collates target objective functions from each CFD run of a compressor CAD model for DEMO to evolve the CAD model. DEMO is a multi-objective implementation of Differential Evolution (DE) by Storn and Price [9]. At this point, the multi-objective optimisation system can run locally on one machine. The next challenge was to deploy the system on a cheap cluster to speed up the calculations. “Cheap cluster” in this context means using already available desktop computers for computationally intensive tasks when they are idle. HTCondor [10] job management software was chosen to be deployed on the clusters to help manage job distribution.
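Since DEMO is described above as a multi-objective implementation of Differential Evolution by Storn and Price, a minimal single-objective DE/rand/1/bin sketch may help fix the idea. This is not the modified DEMO code used in the project: the objective function (a simple sphere), the bounds and the control parameters below are illustrative assumptions, and DEMO replaces the greedy scalar selection with Pareto-based selection.

```python
# Minimal DE/rand/1/bin sketch (after Storn and Price); single-objective only.
import random

def de_optimise(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                generations=100, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct population members, all different from i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)   # clamp mutant to the bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            f = objective(trial)
            if f <= cost[i]:                  # greedy one-to-one selection
                pop[i], cost[i] = trial, f
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

best_x, best_f = de_optimise(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 3)
```

In the project's setting, `objective` would be replaced by one CFD evaluation of a candidate geometry, which is exactly why distributing evaluations across an HTCondor pool pays off.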

3.1. CAD Program
The CAD program (RC_CAD) developed for the project is a parameterised CAD program for creating centrifugal compressor impellers and vane-less diffusers, suitable for use in an automatic multi-objective optimisation design process using genetic or evolutionary algorithms. The program was written in C++ using object-oriented design and memory management techniques [11] for extensibility and speed (it creates geometries in less than 2 seconds on an Intel Core i5, 2.5 GHz, 8 GB RAM laptop computer). Geometric features of the CAD models created can be visualised using the free Matplotlib library [12] called from custom Python scripts, or using ParaView [13]. The geometry of the impeller, including splitter blades and the vane-less diffuser, is defined using 23 parameters in a file named 'optParam_in.dat'. Figure 1 shows the hub and shroud curve parameterisation using 5 Bezier control points each; the direction of movement of the Bezier control points is indicated by arrows. Sample plots of the CAD program output are shown in figures 2 to 6. Figure 2 is a meridional profile of the impeller blade and diffuser channel; the red line shows the location of the splitter blade's leading edge. Figure 3 is a front view of the impeller blades, defined as a ruled surface of straight lines joining hub and shroud contours which are equidistant along the meridional channel of the impeller. Figure 4 is a 3-D view of an impeller. The blade angle (beta) and wrap angle (theta) distributions along the meridional profile are illustrated in figures 5 and 6 respectively.
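To make the parameterisation concrete, a five-control-point Bezier curve of the kind used for the hub and shroud contours can be evaluated with a short Python sketch (the control-point coordinates below are hypothetical illustrations, not the project's actual values):

```python
from math import comb

def bezier(ctrl, t):
    """Evaluate a 2-D Bezier curve at parameter t in [0, 1] from its control points."""
    n = len(ctrl) - 1
    return tuple(
        sum(comb(n, i) * t**i * (1 - t)**(n - i) * p[k] for i, p in enumerate(ctrl))
        for k in range(2)
    )

# Hypothetical (x, r) control points for a hub contour; moving them (as the
# arrows in figure 1 indicate) reshapes the whole meridional curve smoothly.
hub = [(0.00, 0.10), (0.05, 0.10), (0.10, 0.12), (0.14, 0.20), (0.15, 0.30)]
curve = [bezier(hub, t / 20) for t in range(21)]
```

Because the curve interpolates its end control points, the leading- and trailing-edge positions stay fixed while the interior points shape the contour, which keeps the 23-parameter design space well behaved for the optimiser.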

Fig. 1. Meridional parameterisation of impeller and diffuser

Fig. 2. Meridional view of impeller with inlet and diffuser (plotted using Python and Matplotlib)

Fig. 3. Front view of impeller blades showing 11 profile points distribution (viewed in ParaView)

Fig. 4. 3D CAD model visualised in ANSYS TurboGrid

Fig. 5. Beta angle distribution along the impeller meridional length (plotted using Python and Matplotlib)



Fig. 6. Theta angle distribution along the impeller meridional length (plotted using Python and Matplotlib)

3.2. Meshing Program
The meshing program used in the design process was written in FORTRAN by Mr Frank Heyes at Napier Turbochargers. The program creates blocks of structured grid in the inlet, flow passages and diffuser of the compressor geometry defined by the .curve file produced by the CAD program. It takes less than 4 seconds on an Intel Core i5, 2.5 GHz, 8 GB RAM laptop computer to create a mesh of 300,000 cells. Views of a mesh created are shown in figures 7 and 8.

3.3. CFD Solver
TBLOCK is a multiblock flow solver aimed at predicting both main blade path and secondary gas path flows in turbomachinery, although it is also applicable to many other types of flow. The program is written in FORTRAN 77 and its development was started in the early 1970s by Prof. John Denton. The code is now widely used in academia and industry to design different types of turbomachines. It solves the Reynolds-averaged Navier-Stokes equations on multi-block structured grids of the inlet, passages and diffuser of a centrifugal compressor. To facilitate use for turbomachinery blades the program is written in cylindrical (x, rθ, r) coordinates. The coordinates and I, J, K directions must satisfy the convention illustrated in figure 9, and the geometry of every block must be a cuboid with 6 faces and 12 edges, as illustrated in figure 10.

Fig. 7. Fluid region mesh visualised in ParaView (meshed using in-house code)

Fig. 8. Blade and surface mesh created in ANSYS TurboGrid

Fig. 9. Coordinates and grid indices in TBLOCK

Fig. 10. Block and structured grid example for TBLOCK

3.4. DEMO


A modified version [14] of DEMO for batch computing was used in this project to generate candidate geometries (CAD models) in batches, so that HTCondor could dispatch multiple jobs to the cluster of 8 desktop machines running Linux CentOS 6.4. DEMO is a multi-objective implementation of Differential Evolution (DE) by Storn and Price [9]. It combines the advantages of differential evolution with the mechanisms of Pareto-based ranking and crowding distance sorting used by state-of-the-art evolutionary algorithms for multi-objective optimisation [15]. Differential Evolution [9] has attained much popularity in the optimisation community, and the variant used here, DEMO, was found to have performance comparable to other state-of-the-art optimisation algorithms [16].
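DEMO's underlying variation operator is standard DE; a generic DE/rand/1/bin step can be sketched as follows (population size, bounds and control parameters here are illustrative assumptions, not the project's settings):

```python
import random

def de_step(pop, f=0.8, cr=0.9, lo=0.0, hi=1.0):
    """One DE/rand/1/bin step: for each target vector, mutate three distinct
    other vectors and apply binomial crossover, clipping to the bounds."""
    d = len(pop[0])
    trials = []
    for i, target in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(d)  # guarantees at least one mutated gene
        trials.append([
            min(hi, max(lo, a[k] + f * (b[k] - c[k])))
            if (random.random() < cr or k == jrand) else target[k]
            for k in range(d)
        ])
    return trials

random.seed(1)
pop = [[random.random() for _ in range(23)] for _ in range(20)]  # 23 CAD parameters
trials = de_step(pop)
# DEMO then compares each trial with its target under Pareto dominance and
# uses crowding-distance sorting to truncate the population.
```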

3.5. Mapping Program
A mapping program was written in C++ to call the CAD and meshing programs and then run the flow solver at different operating conditions, producing performance data for each CAD model over a range of mass flows at constant rotor speed. The mapping program then calculates 8 objective functions, based on the 23 CAD input parameters, for DEMO to evolve new CAD models to meet the target objectives. The mapping algorithm is illustrated in figure 11, where the red star is the target design point and points 1 to 6 are operating points.

Fig. 11. Illustration of the mapping algorithm
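The per-candidate mapping loop can be sketched in Python; the two predicates below are simple stand-ins for the slope- and efficiency-based choke and surge tests the program uses, and the 2% pressure step is an illustrative assumption:

```python
def choke_predicted(points):
    return len(points) >= 6      # stand-in for the slope / 70% efficiency test

def surge_predicted(points):
    return len(points) >= 10     # stand-in for the iteration-count / efficiency test

def map_candidate(design_pressure, step=0.02):
    """Sketch of one speed-line mapping: solve the design point, then march the
    outlet pressure down until choke and back up from point 1 until surge."""
    points = [design_pressure]          # point 1: performance at design pressure
    p = design_pressure
    while not choke_predicted(points):
        p *= 1.0 - step                 # reduce outlet pressure towards choke
        points.append(p)                # each point restarts from the last solution
    p = design_pressure
    while not surge_predicted(points):
        p *= 1.0 + step                 # increase outlet pressure towards surge
        points.append(p)
    return points

speed_line = map_candidate(2.0)         # hypothetical design pressure ratio
```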

After calling the CAD and meshing programs, the compressor mapping process starts by running TBLOCK to calculate the performance at the design pressure (point 1). It then continues to reduce the outlet pressure by a percentage set by the user and calculates performance at different outlet pressures until choke is predicted. The percentage reduction is defined as a function of the speed line slope. In each case the program restarts the next point from the previous solution to speed up the process. When choke is predicted, surge prediction is started using the solution from point 1. In this case the outlet pressure is increased until surge is predicted. The original percentage increase in outlet pressure is reduced by the program during surge prediction and is defined as a function of the percentage drop in mass flow from the previous point (i.e. as the slope of the speed line reduces towards surge, a smaller increase in outlet pressure is used to calculate the next point). The eight objectives for the optimisation are described using figure 11, assuming the peak efficiency on the speed line is at operating point 2:
1. Reduce distance from 1 to star (in mass flow)
2. Reduce distance from blue line to star (in pressure)
3. Reduce distance from 1 to 2 (in mass flow)
4. Reduce distance from 1 to 2 (in pressure)
5. Centralise peak efficiency (i.e. mass flow at 1 = 0.5 × (choke mass flow + surge mass flow))
6. Design point not too close to surge
7. High peak efficiency
8. Large map width (choke mass flow – surge mass flow)
Choke is determined if the slope of the performance curve (change in mass flow with change in pressure) is less than a set value or the calculated efficiency is below 70%, whichever occurs first. Surge is an unsteady phenomenon and is characterised by longer calculation time and large variance in the mass flow and efficiency calculations. Surge is determined to have occurred if the number of iterations at a point during surge prediction exceeds a particular value or the average efficiency over the last 1000 iterations is less than 70%. The mapping program outputs a 'sim_out.dat' file containing the calculated non-dimensionalised objective functions and an 'operatingPoints_sorted.dat' file containing the compressor speed line performance data.

3.6. HTCondor


HTCondor [10] is a specialised workload management system developed at the University of Wisconsin–Madison for compute-intensive jobs. HTCondor provides a job queuing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. Users submit their serial or parallel jobs to HTCondor; HTCondor places them into a queue, chooses when and where to run the jobs based upon a policy, carefully monitors their progress, and ultimately informs the user upon completion [10]. The HTCondor pool set up for this project consists of 8 desktop computers running the Linux CentOS 6.4



operating system. Each machine has 4 GB of RAM and 4 cores, meaning 32 instances of TBLOCK can run on the desktop cluster to speed up the calculations. The decision to use CAE tools without a Graphical User Interface (GUI) was informed by the need to use cluster job management software such as HTCondor: commercial applications with GUIs are more difficult to set up in a cluster of desktop computers running HTCondor, and often require multiple licences which can be very expensive [17]. A typical Condor pool with six machines is shown in figure 12. The central manager is identified by the presence of the 'negotiator' and 'collector' daemons; submit and execute features are identified by the presence of the 'schedd' and 'startd' daemons respectively.
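Each job handed to HTCondor is described by a submit description file; a minimal example for one mapper run might look like the following (the executable and file names are illustrative, not taken from the project's actual set-up):

```
universe   = vanilla
executable = mapper
arguments  = optParam_in.dat
output     = mapper.out
error      = mapper.err
log        = mapper.log
queue
```

Running `condor_submit` on such a file queues the job; the 'schedd' daemon then negotiates with the central manager to place it on an idle execute machine.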

Fig. 12. A typical Condor pool with 6 machines. Source: [10]

Fig. 13. Summary of mapper program

Fig. 14. Detailed view of mapper algorithm flow

4. The Complete HTC System
The HTC system described in this paper comprises the components described in section 3 together with a wrapper Python script, 'condorManager.py', that bundles everything together. As described in section 3.5, the mapper program bundles the CAD, meshing and solver programs and calls them in that order. The input and output of the mapper program are illustrated in figure 13; an expanded view of the programs called by the mapper is shown in figure 14.


The number of candidates per generation was approximated to 180 (23 × 8 = 184). A batch size of 45 candidates was chosen to run on the 8 desktop machines (32 processors) so that most of the processors are utilised all the time. The batch version of DEMO was then configured to optimise the CAD models in batches of 45, and 45 directories were created on the central manager machine within the Condor pool, one for each candidate CAD model to be solved. The full system works as follows:
1. DEMO generates 45 sets of CAD model input parameters.
2. 'condorManager.py' splits the 45 parameter sets into 45 separate directories.
3. 'condorManager.py' instructs HTCondor to dispatch the jobs to the cluster.
4. 'condorManager.py' delivers the completed batch to DEMO; DEMO produces another batch of 45 candidates, and the process continues.
This process is illustrated in figure 15, where N is the batch size and 23 represents the number of input parameters for the CAD program (RC_CAD).
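The batch split in step 2 can be sketched as follows (an illustrative reconstruction of what 'condorManager.py' does, not the actual script; the directory names are hypothetical):

```python
import os
import tempfile

def split_batch(batch, root):
    """Write each candidate's 23 CAD parameters to its own job directory as
    'optParam_in.dat', one directory per candidate, ready for dispatch."""
    dirs = []
    for i, params in enumerate(batch):
        d = os.path.join(root, f"job_{i:02d}")
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, "optParam_in.dat"), "w") as fh:
            fh.write("\n".join(str(p) for p in params))
        dirs.append(d)
    return dirs

batch = [[0.5] * 23 for _ in range(45)]      # one DEMO batch of 45 candidates
root = tempfile.mkdtemp()
job_dirs = split_batch(batch, root)
```

Keeping one self-contained directory per candidate means each HTCondor job can run anywhere in the pool without sharing state with its neighbours.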



Fig. 15. Full illustration of the HTC system

Fig. 16. Sample e-mail notification from 'condorManager.py'

'condorManager.py' also calls the following sub-programs: 'checkJobsLog.py', 'mergeOutputFiles.py' and 'Accumulator'. 'condorManager.py' calls 'checkJobsLog.py' at intervals set by the user (e.g. every 5 seconds); this checks the job logs in each of the 45 folders for completion status and notifies 'condorManager.py' when all jobs are finished. When all jobs are finished, 'condorManager.py' calls 'mergeOutputFiles.py' to merge the 'sim_out.dat' file (i.e. the calculated objectives) of each CAD model in the different folders into a single file for DEMO to read in. 'condorManager.py' then calls 'Accumulator', which collates the relevant input and output parameters for each valid CAD model in each set-up folder (the 45 folders that make up a batch) into separate log files, with a unique first column ('Job ID') identifying each candidate geometry. This accumulation takes place when a batch is completed, so all the information required to repeat the calculations is stored. Three log files of each valid CAD model are maintained:
1. 'paremetersLog.dat' containing logs of 'optParam_in.dat'
2. 'objectivesLog.dat' containing logs of 'sim_out.dat'
3. 'speedLineLog.dat' containing logs of 'operatingPoints_sorted.dat'
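The merge step can be sketched like this (an illustrative reconstruction of 'mergeOutputFiles.py' under the file layout described above; the sample objective values are hypothetical placeholders):

```python
import os
import tempfile

def merge_sim_out(job_dirs, merged_path):
    """Concatenate each job's 'sim_out.dat' (one line of 8 objectives)
    into a single file for DEMO to read back in."""
    with open(merged_path, "w") as out:
        for d in sorted(job_dirs):
            with open(os.path.join(d, "sim_out.dat")) as fh:
                out.write(fh.read().strip() + "\n")

# Illustrative set-up: three fake job folders, each with 8 objective values.
root = tempfile.mkdtemp()
dirs = []
for i in range(3):
    d = os.path.join(root, f"job_{i}")
    os.makedirs(d)
    with open(os.path.join(d, "sim_out.dat"), "w") as fh:
        fh.write(" ".join(["0.1"] * 8))
    dirs.append(d)

merged = os.path.join(root, "merged_sim_out.dat")
merge_sim_out(dirs, merged)
```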

Fig. 17. Picture of computers in the condor pool

'condorManager.py' then sends an e-mail with statistics of the valid designs, along with the log files attached, to the relevant project team members. A sample e-mail notification is shown in figure 16 and a picture of the 8 desktop machines in the Condor pool is shown in figure 17.

5. Preliminary Results

5.1. TBLOCK Validation
TBLOCK and ANSYS CFX results for the same geometry were compared for validation purposes. The results are comparable across the different speed lines, as shown in figure 18, where a comparison is also made with the old design to be optimised. The discrepancy on the highest speed line may be due to higher numerical viscosity in ANSYS CFX, which allows it to achieve a higher pressure ratio at high speeds and delays the inception of surge.

Fig. 18. Validation: CFX vs TBLOCK

Fig. 19. Convergence plots of objectives 1 to 6 (in the order listed)

Fig. 20. Convergence plot of objective 7 (peak efficiency)

5.2. Problem Set-up
The optimisation was set up without carrying out an initial Design of Experiments (DoE). This is because comprehensive knowledge of how the geometric features affect the 3-D flow physics is not available a priori, and also to avoid initialising the optimisation with a data set that could limit the search. Hence DEMO was set up to search the entire design space given by the parameter limits imposed in its configuration file.

5.3. Results
Preliminary plots of the average values of the objectives per generation after 17 generations (37 days of running) are shown in figures 19 to 21; in figure 21 the global objective function is calculated as the average of all the objective functions per generation. Figure 19 shows the progression of objectives 1 to 6, which reduce from one generation to the next as anticipated. Figures 20 and 21 show the peak efficiency and map width respectively, increasing from one generation to the next as expected.


Fig. 21. Convergence plot of map width and global objective function

Figures 22 to 24 show the progression of the Pareto front for a selection of objectives. The arrow indicates the direction of improvement of the objective function from one generation to the next. For example, figure 22 is a plot of map width (objective 8) vs peak efficiency (objective 7) and both should increase as indicated by the arrow.



5.4. Choice of Best Compromise Design(s)

Fig. 22. Map width (obj. 8) vs Peak Efficiency (obj. 7)

Choosing the best candidate(s) is not easy using these plots alone. Given the number of objectives, confidently choosing a set of best candidates visually would require a combination of 3-D plots across several generations. This is a very tedious approach, so the better option is to filter and sort the log files based on the objectives of interest. In this case, objective 1 ('dM_FOP_Target') and objective 7 ('Peak_Efficiency') were used to filter the 'objectivesLog.dat' file, and the other objectives were sorted accordingly. The final choice of design, further analysed and presented here, also required checking 'speedLineLog.dat' for the swallowing capacity of the promising candidates, to ensure the operating speed line covers the current design operating speed line. In the end, candidate No. 1514 in the 17th generation was chosen as the best compromise design, as indicated by a circle in figures 22 to 24.
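The filter-and-sort step can be sketched with the Python standard library; the column names follow the text, but the rows below are hypothetical illustrations rather than real log data:

```python
import csv
import io

# Hypothetical excerpt of 'objectivesLog.dat': Job ID, objective 1
# (dM_FOP_Target, smaller is better) and objective 7 (Peak_Efficiency,
# larger is better).
log = """JobID dM_FOP_Target Peak_Efficiency
1490 0.0120 0.9610
1514 0.0038 0.9770
1502 0.0045 0.9755
"""

rows = list(csv.DictReader(io.StringIO(log), delimiter=" "))
# Keep candidates close to the target mass flow, then rank by peak efficiency.
short_list = [r for r in rows if float(r["dM_FOP_Target"]) < 0.005]
short_list.sort(key=lambda r: float(r["Peak_Efficiency"]), reverse=True)
best = short_list[0]["JobID"]
```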

5.5. Analysis of Results

Fig. 23. Objective 1 vs Objective 7

Fig. 24. Objective 1 vs Objective 3

In figures 23 and 24, 'dM_FOP_Target' is an indication of how close the candidate is to the target mass flow and should be as close to zero as possible. In figure 24, 'dM_FOP_PeakEff' is an indication of how close the design point efficiency is to the speed line peak efficiency. In certain turbomachinery applications it is preferred to have the peak efficiency at the design point, in which case only candidates with 'dM_FOP_PeakEff' = 0 are of interest.


A comparison of the calculated objective functions for the old and new design is illustrated in figure 25. The optimised design is better than the current design in 5 out of the 8 objectives. Objectives 1 and 2 are impossible to match exactly (i.e. achieving the exact design point mass flow), but the values calculated here give a mass flow within 0.4% of the target value, which is considered a satisfactory deviation. The figure also shows a 2.4% increase in peak efficiency over the old design, which is substantial in the current application. The meridional view of the old and new designs is shown in figure 26, and the performance map of the old and new designs is compared in figure 27. The new design shows a significant increase in swallowing capacity on the 25,000 rpm, 30,000 rpm and 35,000 rpm speed lines. The new design also maintained the surge margin except at the highest speed line. At the design point of 11.4 kg/s and pressure ratio of 2:1 on the 25,000 rpm speed line, there is an increase in efficiency of about 2.5%. The new design clearly improves on the old design in terms of efficiency across all speed lines. Whilst the new design is a major aerodynamic improvement over the previous design, a mechanical analysis is planned to ensure stress calculations are within safe operating limits.

Objective No. | Current Design | Improved Design
1  | 0      | 0.0038
2  | 0      | 0.0037
3  | 0.1163 | 0.0792
4  | 0.1000 | 0.0752
5  | 0.1000 | 0.0927
6  | 0.0476 | 0.0615
7* | 0.9766 | 1.0000
8  | 0.5349 | 0.6852

Fig. 25. Comparison of objective functions (*normalised data)

Fig. 26. Meridional view of old and new design (New design in broken lines, Old design in bold)

Fig. 27. Performance map comparison of old and new design (No. 1514)

6. Conclusion
In this paper, an HTC system based on 8 desktop machines, currently located at the University of Lincoln School of Engineering, has been presented. As this is a KTP project, the same system will be implemented at the company partner (Napier) so that idle desktop machines can be used for computationally intensive tasks such as CFD/CSM calculations. This will help reduce the additional cost of maintaining a dedicated computing cluster at Napier. Any additional cost arising from employees leaving desktop machines on overnight could be reduced by configuring Condor to wake machines only when needed for calculations. The preliminary results presented here show an improvement in efficiency and map width compared to the current HP centrifugal compressor design. The use of DEMO and HTCondor with CAE tools shown here is an improvement in the design capability of Napier Turbochargers Ltd.

7. Possible Future Work
The CAD program will be improved to design splitter blades independently of the main blade and also to design vaned diffusers. The HTC optimisation system may also be expanded in the future to include automatic CSM optimisation, to make sure new designs always meet stress constraints. Moreover, a concurrent sensitivity analysis, mimicking errors during manufacture of impellers in order to identify errors that have no effect on impeller performance, is being carried out. Results from the sensitivity analysis could lead to significant cost savings with regard to manufacturing wastage.


Acknowledgement



This project received financial support from the Knowledge Transfer Partnerships (KTP) programme. KTP aims to help businesses improve their competitiveness and productivity through the better use of the knowledge, technology and skills that reside within the UK knowledge base. KTP is funded by the Technology Strategy Board (TSB) along with other government-funded organisations. The author would also like to thank Dr Mike Riley at the University of Lincoln for providing the modified DEMO optimisation code, helping to install Linux on the 8 computers, and his assistance in designing the mapping program. Thanks also to Mr Frank Heyes and Dr Paul Roach at Napier Turbochargers Ltd for their guidance during the design of the CAD and mapping programs.



References
[1] O.F. Okhuahesogie, J. Stewart, F.J.G. Heyes, P.E. Roach. Design optimization of a two-stage turbocharger compressor impeller. KTP Associates Conference, Brighton, 2012.
[2] O.F. Okhuahesogie, J. Stewart, M.J.W. Riley, F. Heyes, P. Roach. A 1-D analytical code for the design and multi-objective optimisation of high-pressure compressors within two-stage turbochargers for marine application. IMechE International Conference on Turbochargers and Turbocharging, London, 13–14 May 2014.
[3] S. Burguburu, A. le Pape. Improved aerodynamic design of turbomachinery bladings by numerical optimization. Journal of Aerospace Science and Technology, November 2002.
[4] C. Geuzaine, J.-F. Remacle. Gmsh: A three-dimensional finite element mesh generator with built-in pre- and post-processing facilities. International Journal for Numerical Methods in Engineering 79(11), pp. 1309–1331, 2009.
[5] SALOME platform, http://www.salome-platform.org/. Accessed 04/04/14.
[6] J.D. Denton. The Calculation of Three-Dimensional Viscous Flow Through Multistage Turbomachines. ASME Journal of Turbomachinery 114(1), pp. 18–26, January 1992.
[7] OpenFOAM, http://www.openfoam.org/index.php. Accessed 04/04/14.
[8] T. Brandvik, G. Pullan. SBLOCK: A Framework for Efficient Stencil-Based PDE Solvers on Multi-core Platforms. Whittle Laboratory, Department of Engineering, University of Cambridge, Cambridge, UK.
[9] R. Storn, K. Price. Differential Evolution – A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces. International Computer Science Institute, 1947 Centre Street, Berkeley, CA 94704.
[10] M. Litzkow, M. Livny, M. Mutka. Condor – A Hunter of Idle Workstations. Proceedings of the 8th International Conference on Distributed Computing Systems, pp. 104–111, June 1988.
[11] J. Liberty, S. Rao, B. Jones. Sams Teach Yourself C++ in One Hour a Day, 6th edition. Sams Publishing, Indianapolis, Indiana, 2009.
[12] J.D. Hunter. Matplotlib: A 2D Graphics Environment. Computing in Science & Engineering 9(3), pp. 90–95, 2007.
[13] ParaView, http://www.paraview.org/. Accessed 04/04/14.
[14] M.J.W. Riley, T. Peachey, D. Abramson, K.W. Jenkins. Multi-objective engineering shape optimization using differential evolution interfaced to the Nimrod/O tool. IOP Conference Series: Materials Science and Engineering 10 (2010) 012189.
[15] T. Robic, B. Filipic. DEMO: Differential Evolution for Multi-objective Optimization. Department of Intelligent Systems, Jozef Stefan Institute, Jamova 39, SI-1000 Ljubljana, Slovenia.
[16] T. Tusar, B. Filipic. Differential Evolution Versus Genetic Algorithms in Multi-objective Optimization. Department of Intelligent Systems, Jozef Stefan Institute, Jamova 39, SI-1000 Ljubljana, Slovenia.
[17] I.C. Smith. Experiences with Running MATLAB Applications on a Power-Saving Condor Pool. University of Liverpool Computing Services Department, 11 September 2009.




DEVELOPMENT OF NEW GENERATION OF SMART PROSTHETIC LINERS
A. Gallego,a,b A. Evans,a S. Zahedi,a J. McCarthy,a M. Tavakoli,b F. Salamat-Zadeh,b I. Jones b

a Chas. A Blatchford & Sons Ltd. Unit D, Bond Close, Basingstoke (Hampshire) RG24 8PZ, UK.
b TWI Ltd. Granta Park, Great Abington (Cambridge), CB21 6AL, UK.
Email: Ana.Gallego@blatchford.co.uk

Abstract
There are approximately 5000 major limb amputations carried out every year in the UK; 70% of lower-extremity amputations are due to arterial disease, while 57% of upper-limb amputations are caused by trauma. The socket is the component that 'connects' the artificial limb with the stump, and it is therefore crucial that it fits well and is comfortable. Socket comfort is one of the main factors in the recovery of an amputee to a normal level of activity. Prosthetic liners are currently used by 30% of amputees to provide comfort around the stump, serving as a cushioning interface between the skin and the hard socket. Throughout the life of the amputee, the stump undergoes fluctuations in volume and shape, compromising comfort. This KTP1 Project was set up with the aim of commercialising a range of prosthetic liners to improve amputee comfort, by developing a novel manufacturing process and then introducing unique features that can enhance comfort for lower limb amputees. The initial part of the project consisted of a literature search on the state of the art for prosthetic liners and an induction into Medical Devices and FDA2 regulations, accompanied by an extensive patent search and classification to determine the final features of the end product. The KTP Project then undertook a material selection process to establish the main components of the products. The search for new materials and additives led to successful interactions with different suppliers, through which expertise was gained in the field. The first samples were produced at small scale to determine the optimal processing route. Subsequently, a production-size tool was manufactured to produce samples suitable for fitting to amputees. These samples were distributed amongst patients within limb centres run by the Company; clinicians fitted and reviewed the product, providing data on durability, comfort and fitting. This contributed to the design and refinement of the current manufacturing line, which proved challenging at the beginning of the experimentation but represented an invaluable learning curve that prevented setbacks thereafter. Mechanical tests were also carried out to assure quality and create a benchmark for further improvement. The challenges encountered throughout the design of the manufacturing process were solved in collaboration with the Knowledge Base, specialists in adhesion and polymer processing. The design of the manufacturing process resulted in the accumulation of valuable knowledge in the field of silicone rubbers and engineered knitted textiles, as well as discrete bonding and moulding techniques. The KTP Project also ensured the correct transition from the Research and Development environment to the production line by providing training, work instructions and quality control guidelines, thus achieving continuity in manufacture.

1 KTP: Knowledge Transfer Partnership
2 FDA: US Food and Drug Administration



The designed prosthetic liner is due to be commercialised by the end of 2014 and sold around the world. The achieved base product will now serve as a platform to implement smart features in the prosthetic liner and to address the main issues of the patient, in particular sweat management, which causes discomfort in a large number of users. This will be addressed by the implementation of laser perforations that facilitate the elimination of sweat and therefore keep the stump3 drier.

Keywords: Knowledge Transfer Partnerships (KTP), Prosthetic Liner, Silicone Liner, Silicone Elastomer, Circular Knitting, Adhesives, Medical Devices, Lower Limb.

3 Blatchford Patent US8308815 B2.



Introduction
There are approximately 5000 major limb amputations carried out every year in the UK; 70% of lower limb amputations result from arterial disease and 57% of upper-limb amputations are caused by trauma 0. Socket comfort is one of the main factors on which amputees depend to recover the level of activity they had prior to amputation. Prosthetic liners are currently used by 30% of amputees. Liners provide comfort around the stump, as they serve as a cushioning interface between the skin of the stump and the rigid socket (Figure 1).

Figure 1. Liner worn over the stump and attached to the rigid socket.

The majority of prosthetic liners are off-the-shelf products, available in a range of sizes and types depending on the amputation and the mode of attachment to the prosthesis. Most liners currently on the market are made of silicone, thermoplastic elastomer or polyurethane. The low durometer grades of these materials can provide comfort and good 'memory', allowing the product to recover its original shape without deforming for at least 6 months of daily use. Silicone in particular is the preferred material for this application as it offers stability, low surface tension and lack of toxicity 0. However, the main disadvantage of these products is their low thermal conductivity (e.g. an existing silicone liner has a thermal conductivity of 0.205 W/m·K 0, compared with high thermal conductivity materials like aluminium, with values around 170–200 W/m·K 0). Consequently, stump skin temperatures may increase, causing thermal discomfort and loss of suspension, particularly in hot or humid environments, thus reducing the amputee's quality of life 0. Silicone liners are currently manufactured by most of the main prosthetic component manufacturers in the world. Thus far, no prosthetic devices manufacturer has come up with a definitive solution for temperature management of the stump, even though prosthetic liners have been on the market for more than two decades. Chas A Blatchford & Sons Ltd, who do not have their own range of elastomeric prosthetic liners, saw the need for innovation and engaged in a KTP Project to improve their competitiveness.

Partnership Background
Endolite™, based in Basingstoke (United Kingdom), is the products division of Chas A Blatchford & Sons Ltd., which has been designing and manufacturing artificial limbs and prosthetic components since 1890 and is in fact one of the oldest prosthetic manufacturers. Chas A Blatchford & Sons Ltd represented the Company Partner for this KTP Project.



The KTP Knowledge Base was TWI Ltd based in Cambridge. TWI is one of the world’s foremost independent research and technology organisations, with expertise in engineering, materials and joining technologies.

Objectives
The objective of this KTP was to develop a range of silicone liners to act as a platform for innovation, to overcome the sweat management challenge for amputees and to introduce novel features.

Approach
The KTP Project evolved as an iterative design process, based on a cyclic process of prototyping, testing, analysing and refining a product or process, as illustrated in Figure 2: identification of a need, specification/requirements, process design, 'pop' model, risk assessment, development, patient trials and analysis.

Figure 2. Product design and productionalisation cycle.

The identification of a need and the requirements were provided by Chas A Blatchford and Sons Ltd, and were clearly scoped at the beginning of the KTP Project to guide the development of a new range of silicone liners. An internal feasibility assessment was made by the team, which included an extensive patent search carried out by the KTP Associate to ensure a commercially viable route and to describe the roadmap for the KTP Project. The materials selection began with a risk assessment, which implied the exclusive use of medical/prosthetic grades to minimise risk. Prosthetic liners need to follow Medical Devices Regulations and therefore the Class I General Controls. Prosthetic liners are classified Class I because they are 'not intended to be used for supporting life, prevent impairment to human life or will not present a potential unreasonable risk of illness or injury' 0. Class I devices should also follow the General Controls specified in the regulation. This was verified by subjecting the assembled materials to biocompatibility testing (cytotoxicity, irritation and sensitisation) following ISO 10993-1:2009, as shown in Figure 3, to ensure that the product is safe to use in contact with the skin. Within the material search there was a need to source a manufacturer to produce bespoke reinforcing fabric as a component of the liner, as Blatchford does not have textile manufacturing facilities in-house.



Figure 3. Biocompatibility test matrix (based on ISO 10993-1:2009, Evaluation and testing): tests required for surface devices in contact with the skin.

To develop the manufacturing process, a small prototype mould tool (Figure 4) was created to provide a ‘proof of concept’ benchmark and to determine the most efficient, safe and repeatable processing route without the need to implement very expensive and complex equipment.

Figure 4. Initial ‘PoP model’ mould tool.

Subsequently, production-size mould tools covering different ranges were produced to study the scalability of the process, identify weak points throughout the different stages of manufacture and refine them where necessary. The creation of these full-size samples also allowed the KTP Project team to provide a number of trial patients with real samples to assess durability (fabric-silicone debonding, tear strength, washing cycles), comfort, ease of use and possible skin reactions. The results obtained during the various tests, together with the patient trial feedback, were used to assess the product and to refine its design in terms of profile changes and surface improvements. Mechanical tests provided comparative data demonstrating the elasticity in different areas and directions of the liner and an assessment of the integrity of the product.

Outcomes KTP Associate’s Education The KTP Associate spent part of her time collaborating with the senior consultant, receiving education about prosthetics and orthotics. This was complemented by a major investigation of the current state of the art through the Associate’s attendance at numerous conferences, exhibitions and major world-wide prosthetics and orthotics events such as AOPA4 and ISPO5 UK.

4 AOPA: The American Academy of Orthotists & Prosthetists.
5 ISPO: International Society for Prosthetics & Orthotics.



Access to consultant prosthetists and biomechanical engineers also provided a useful understanding of the mechanics of the body and the medical aspects of amputation.

Patent Review The patent search provided the team with a better understanding of the current state of the art in terms of the technology and materials used to date, and encouraged innovation.

Material Selection and Process Development The construction of a pilot installation allowed the refinement of the initial process, which had first been carried out in small-scale moulds. This highlighted processing challenges that did not occur during the ‘PoP6’ model stage experiments. Most of these challenges were linked to the materials, specifically to their properties in the liquid, uncured form. Variables such as the rate of temperature increase (as shown in Figure 5) and the flow characteristics at the point of injection, studied to overcome the difficulties of using high-viscosity materials (Figure 6), were crucial factors in controlling the different moulding stages. By carrying out rheology studies of the different silicones used, it was possible to determine the optimum working time (pot life) at different temperatures, as shown in Figure 7 and Figure 8. These tests were carried out by the Knowledge Base using a Bohlin Gemini rheometer with an ETC oven.

Figure 5. Moulding challenge: temperature differential between parts of the mould tool creates sink marks (top).

As the viscosity of the silicone increases drastically once the cross-linking phase starts, it is essential to design the moulding process to work within the pre-crosslinking stage, to avoid flow problems and therefore air entrapment. As shown in the figures, working at room temperature (25 °C) allows a pot life of 40 minutes for Materials 2 and 3 and 120 minutes for Material 1. However, elevating the temperature to 40 °C accelerates cross-linking, dramatically reducing the working-time window to just a few minutes. This allows a faster production turn-around but requires exhaustive mould design to avoid air entrapment, as the material tends to form a skin on the mould surface.
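The strong temperature dependence of the pot life can be sketched numerically. Assuming an Arrhenius-like dependence of the G'/G" crossover time on absolute temperature, two calibration points are enough to interpolate the working window at intermediate temperatures. The calibration values below are illustrative assumptions loosely based on the trends described in the text (about 120 min at 25 °C and a few minutes at elevated temperature for Material 1); they are not the measured data behind Figures 7 and 8.

```java
// Illustrative pot-life estimate (time to G'/G" = 1) versus temperature,
// assuming Arrhenius-like behaviour: ln(t) = a + b/T, with T in kelvin.
// Calibration points are invented for the sketch, not measured data.
public class PotLife {
    // Fit ln(t) = a + b/T through two (temperature in C, pot life in min) points.
    static double[] fit(double tc1, double min1, double tc2, double min2) {
        double invT1 = 1.0 / (tc1 + 273.15), invT2 = 1.0 / (tc2 + 273.15);
        double b = (Math.log(min1) - Math.log(min2)) / (invT1 - invT2);
        double a = Math.log(min1) - b * invT1;
        return new double[] { a, b };
    }

    // Predicted pot life in minutes at temperature tc (C).
    static double potLifeMinutes(double[] ab, double tc) {
        return Math.exp(ab[0] + ab[1] / (tc + 273.15));
    }

    public static void main(String[] args) {
        // Assumed calibration: 120 min at 25 C, 4 min at 60 C (hypothetical).
        double[] ab = fit(25.0, 120.0, 60.0, 4.0);
        System.out.printf("Estimated pot life at 40 C: %.1f min%n",
                potLifeMinutes(ab, 40.0));
    }
}
```

In practice the measured rheometer curves would be used directly; the fit simply makes explicit why a modest temperature rise collapses the working window.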

6 PoP: Proof of Principle.



Figure 6. Moulding challenge: poor flow due to high viscosity, resulting in air entrapment.


Figure 7. Time required for G'/G" = 1 as a function of temperature (G' = Elastic modulus / G" = Viscous modulus). ▲Material 1. ■ Material 2. ◊ Material 3.


Figure 8. Detailed view. Time required for G'/G" = 1 as a function of temperature (G' = Elastic modulus / G" = Viscous modulus). ▲Material 1. ■ Material 2. ◊ Material 3.

The advances in the development of the process resulted in improved equipment being selected for production. A significant example was the automated silicone-dispensing system, which provided Chas A Blatchford & Sons Ltd with a very versatile piece of equipment that can potentially be used for future products. Small-scale production of one of the components started with single-use cartridges and a pneumatic gun (Figure 9). This method was suitable for development but, at larger scale, was expensive and labour intensive. It was replaced by an automated system with automatic degassing which, although it required significant early capital expenditure, is a much more economical, clean, reliable and versatile solution.



Figure 9. Pneumatic gun and cartridges

Prosthetic liners are commonly covered by a reinforcing fabric with specific stretch properties depending on the type of amputation. The KTP Associate searched the industry world-wide for a suitable manufacturer that could provide the KTP Project with a bespoke, logistically viable solution. This search proved to be complex, as the requirements specified for this type of fabric cover are not easily met in the UK, following the move of textile manufacturing to the Far East in recent years. As a result of the collaboration between the KTP Associate, the Company and the third-party manufacturer (supplier), Blatchford has a better understanding of textiles, and the partners have taken the opportunity to embark on new collaborative projects together for other prosthetic applications.

Patient Trials The patient trials were carried out over a period of approximately two years, in which transtibial amputees of both genders, aged from 21 to 85, were fitted with prototypes of the liners. For both cushion7 and locking8 types of liner, shear at the proximal end was reported, requiring further investigation. It was also noted that physical delamination of the reinforcing fabric occurred in some cases. This resulted in two improvements: a reduction of the coefficient of friction, and better adhesion. The high coefficient of friction between the skin and the liner is due to the inherently tacky nature of soft silicone rubbers, which causes irritation of the stump. The use of an additive and a modification of the initial processing method led to a dramatic reduction of the coefficient of friction (see Figure 10), supported by patient trials that reported no further issues related to shear at the proximal end. Quantitative results were obtained using a Chemsultants COF-1000 coefficient-of-friction tester, following ASTM D1894 guidelines, at the Knowledge Base facilities. The tests confirmed a reduction of the static and kinetic CoF9 of up to 76%.
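The calculation behind an ASTM D1894-style result is straightforward: the coefficient of friction is the measured friction force divided by the normal force exerted by the sled; the static value comes from the initial peak force and the kinetic value from the average sliding force. The sketch below shows this arithmetic; the force values in the example are hypothetical, chosen only so that the resulting reduction is of the order reported in Figure 10.

```java
// Simplified sketch of an ASTM D1894-style coefficient-of-friction
// calculation: CoF = friction force / normal force (sled weight).
// All force values used in main() are hypothetical illustrations.
public class FrictionTest {
    static final double SLED_MASS_KG = 0.2;   // ASTM D1894 sled is nominally 200 g
    static final double NORMAL_FORCE_N = SLED_MASS_KG * 9.81;

    // Static CoF from the initial peak force before sliding begins.
    static double staticCoF(double peakForceN) {
        return peakForceN / NORMAL_FORCE_N;
    }

    // Kinetic CoF from the mean of the sampled sliding-force trace.
    static double kineticCoF(double[] slidingForcesN) {
        double sum = 0;
        for (double f : slidingForcesN) sum += f;
        return (sum / slidingForcesN.length) / NORMAL_FORCE_N;
    }

    static double percentReduction(double before, double after) {
        return 100.0 * (before - after) / before;
    }

    public static void main(String[] args) {
        double uncoated = staticCoF(2.75);  // hypothetical peak force, N
        double coated   = staticCoF(0.69);
        System.out.printf("Static CoF %.2f -> %.2f (%.0f%% reduction)%n",
                uncoated, coated, percentReduction(uncoated, coated));
    }
}
```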

7 Cushion: thick distal end of silicone for comfort and skin protection, for more sensitive stumps with bony prominences.
8 Locking: rigid interface at the distal end to attach a pin that connects to a locking mechanism in the prosthetic socket.
9 CoF: Coefficient of Friction.



[Figure 10 chart: bar chart of static and kinetic CoF (scale 0.00–1.60) for uncoated and coated finishes, showing a 75% reduction in static CoF and a 78% reduction in kinetic CoF.]

Figure 10. Reduced coefficient of friction compared to the original finish.

The improvement to the surface of the silicone liner could also be observed by microscopy, as shown in Figure 11, where a Keyence VHX-2000 series microscope was used to capture uncoated and coated surfaces.


Figure 11. Optical microscopy photograph showing the comparison between uncoated and coated surfaces.

The second set of issues, reported right at the beginning of the project, was mechanical failure due to delamination of the reinforcing fabric (Figure 12). This was immediately addressed through a workshop organised between the Company and the Knowledge Base to analyse the weak points of the process and produce a more robust laminate. After numerous tests, it was determined that the ideal lamination should achieve full mechanical interlocking between the materials and the fibres of the fabric, maximising the strength of the adhesive used and thus achieving a very strong bond. All possible lamination cases can be seen in Figure 13, where the ideal lamination characteristics are represented diagrammatically.



Figure 12. Initial mechanical failures caused delamination between the fabric and the silicone elastomer; this was rapidly addressed.

Figure 13 a) shows an unfavourable lamination case: due to the process, the silicone material strikes through the fabric, leaving undesired marks or stains on the surface of the product. The worst-case scenario is shown in b), where the material has not penetrated the fibres, indicating that the contact between the substrate and the silicone is very poor; this produces a weak bond and therefore premature delamination. Lastly, case c) represents the ideal bond, where the silicone covers the interstices of the fabric without passing through it, so that both good adhesion and good appearance are achieved.


Figure 13. Possible lamination cases: a) strike-through; b) poor adhesion; c) ideal adhesion.

Implementation After mitigating the issues reported initially, the patient trials proved successful, resulting in the development of a small-scale production plant. The proposed production process for the silicone liners was written up and handed over to the Operations/Manufacturing team, who took responsibility for producing the tools and handling equipment necessary for full-scale production. The handover required the creation of documentation (drawings, part-number assignment and the technical file). Meanwhile, production operatives were trained on the pilot plant, work instructions were written and audited, and quality standards were set up. This required creating a synergistic environment across all the disciplines involved to assure a successful transition period. For instance, the implementation of marketing items such as pad printing and laser etching was resolved in a multidisciplinary manner by engineers and materials scientists alike (Figure 14).

Figure 14. Specialised pigments for Pad Printing and Laser Etching.

Similarly, the separator material used as part of the packaging was optimised in collaboration with the Marketing team, resulting in an interesting material selection exercise: the material had to be recyclable, inexpensive, easy to insert and low in storage volume, so different options were explored and implemented, such as those shown in Figure 15.



Figure 15. Packaging selection

Progression to the Smart Liner As a consequence of an earlier Government-funded project between the MoD10 and Chas A Blatchford & Sons Ltd to solve the problem of sweat management in young, active amputees, the KTP Associate has been collaborating with a laser material processing company and the Knowledge Base to commence the design of laser-drilled prosthetic liners (Figure 16). Using laser drilling techniques, small holes can be created through the thickness of the liner to wick perspiration out, achieving a drier stump. The results from the first units processed in specialised laser facilities were very encouraging. The next step in the implementation of the process is to design a bespoke automated apparatus in-house.

Figure 16. Laser drilled liner prototype

Conclusions The Knowledge Transfer Partnership helped to improve the Company Partner’s business by enabling entry into a new market within prosthetics, from which it had until now been absent. The silicone liner thus complements the Endolite™ range of lower-limb system products for amputees. The KTP project will create further avenues of innovation based upon the technology developed, such as fitting aids and even alternative methods of suspension. The new generation of smart prosthetic liners is expected to create at least £30K of revenue in the first year, a projected £0.88m after three years and ultimately £1.5m annually. This will also create new jobs and provide a new generation of products for lower-limb amputees that will enhance their rehabilitation at a price affordable to the NHS.

10 MoD: Ministry of Defence.



The KTP Associate’s personal development has been substantial, as she has had the chance to work in all areas of development and manufacture. The Associate has not only strengthened her existing knowledge of engineering, but also gained knowledge in other fields such as biomaterials and prosthetics. Practically, she has had first-hand interaction with patients and prosthetists. Being part of a solid team, and developing interpersonal skills such as establishing good relationships with suppliers, have been of prime value throughout the project.

Future Work At the time of writing, the KTP Project had achieved the development, and commenced the manufacture, of two ranges of silicone liners for transtibial amputees. The remaining work until the end of the project will focus on developing two further ranges of liners for transfemoral amputees. This will follow much the same stages as adopted so far, but with the benefit of hindsight.

Acknowledgements The authors wish to thank Knowledge Transfer Partnerships for funding the collaborative project between Chas A Blatchford & Sons Ltd and TWI Ltd (KTP 8370).

References
NHS Choices. ‘Amputation’. Accessed online: http://www.nhs.uk/conditions/amputation/Pages/Introduction.aspx on 14.03.2014.
Colas, A. et al. (2013) ‘Silicones’. In: Modjarrad, K. and Ebnesajjad, S. (eds) Handbook of Polymer Applications in Medicine and Medical Devices. Oxford (UK): Elsevier. pp. 131-144.
Klute, G. K., Rowe, G. I., Mamishev, A. V. and Ledoux, W. R. (2007) ‘The thermal conductivity of prosthetic sockets and liners’. Prosthetics and Orthotics International 31: 292.
Mazzolani, F. Aluminium Alloy Structures. 2nd edition. Cambridge (UK): Chapman & Hall. pp. 662.
FDA. ‘General Controls for Medical Devices’. Accessed online: http://www.fda.gov/MedicalDevices/DeviceRegulationandGuidance/Overview/GeneralandSpecialControls/ucm055910.htm on 18.03.2014.
McCarthy, J., Ross, J., McDougal, A., Ward, A., Ritchie, L. and Zahedi, S. (2012) ‘Sweat Management in Prosthetic Sockets: Results of Research and Patient Trials’. Paper presented at the ISPO UK MS 40th Anniversary Meeting, Sheffield, 27-28 September 2012.
Namsa. ‘Biocompatibility Matrix’. Accessed online: http://www.namsa.com/Portals/0/Documents/biocompatibility-matrix.pdf on 17.03.2014.
OandP. ‘Prosthetic Liners and Sleeves: Reaching New Levels of Comfort, Control, and Suspension’. Accessed online: http://www.oandp.com/articles/2013-03_07.asp on 14.03.2014.



The design and implementation of a bespoke Enterprise Resource Planning (ERP) system for an acoustical engineering company

Betrand I. Ugorji and Bernadette-Marie Byrne, University of Hertfordshire, Hatfield, UK, and Richard Collman, ACE/ACC, Bourn, Cambridgeshire. Email: b.ugorji2@herts.ac.uk

This paper describes the tasks completed so far as part of a Knowledge Transfer Partnership between the University of Hertfordshire and Acoustical Control Engineers (ACE), a small and medium-sized enterprise (SME) based in Cambridgeshire, UK. ACE’s 25 personnel design, manufacture and install noise and vibration control systems to solve a wide range of acoustic problems. The projects undertaken include acoustic enclosures for supermarket refrigeration plant and for generators used in many situations, together with other more diverse applications such as controlling noise in the workplace and even on a luxury boat. Before the current KTP project the company used some partially computerised systems, consisting of spreadsheets for acoustic analyses, pricing and project management functions, supplemented with a paper-based system to ‘fill the gaps’.

Enterprise Resource Planning (ERP) systems provide an integrated database for all parts of the organisation, allowing decisions to be based on a complete understanding of the organisation’s information, avoiding the problems due to duplication of data and ensuring that the consequences of decisions in one part of the organisation are reflected in the planning and control systems of the rest of the organisation. ERP systems became popular from the 1990s, mainly in relatively large organisations, due to their complexity and cost. This project is unusual in that, rather than adapting an off-the-shelf ERP solution to ACE’s very specific and specialised requirements, we are taking an ERP development approach in an SME whose legacy systems are made up of spreadsheet and paper-based systems. For the software development, an Agile approach has been used; Agile encompasses software development methods based on iterative and incremental development.
The initial attempt was to develop the ERP from an open-source ERP code base; however, this effort proved futile as a result of the bespoke nature of ACE’s business and product lines. Mapping ACE’s data model onto the database of any existing ERP system proved to be a very difficult problem, so developing the ERP from first principles was inevitable. Several of the ERP modules have been developed, user training has taken place and the core modules have been signed off. The project is due to complete in September 2014, by which time we will have further information on how the ERP system has increased the competitiveness of the company, as well as experience of introducing an ERP into an SME. However, as would be expected, the work undertaken in developing the system so far has had several significant effects on ACE and has acted as a catalyst for change in various parts of ACE’s business. Key words: ERP, SME, software development.



Background An Overview of ERP Systems in Business ERP systems enable businesses to reduce costs and increase efficiency and output by increasing the speed and accuracy with which they achieve their strategic objectives. Business computing has a relatively short history, with the first batch-processing systems being used in the 1960s. These were usually for a single system within a department; for example, payroll systems were usually the first processes within an organisation to be computerised. Systems have now progressed to organisations having large centralised databases which can support multiple users. Figure 1 depicts an ERP system model.

Figure 1: ERP - a centralised database system

ERPs are business-wide information systems that aim to integrate information from many business functions, such as Materials Requirements Planning (MRP), which calculates the materials requirements and production plans needed to complete known and forecast sales orders. An ERP system provides an integrated database for all parts of the organisation, allowing decisions to be based on a complete understanding of the organisation’s information, avoiding the problems due to duplication of data and ensuring that the consequences of decisions in one part of the organisation are reflected in the planning and control systems of the rest of the organisation. Prior to ERP systems, organisations’ data were typically spread across several separate information systems. For example, a firm could have separate systems for purchasing, order management, human resources and accounting, each of which would maintain a separate data source. The aim of ERP is to subsume these into a single seamless system. Research has shown that system fragmentation is the primary culprit for information delays and distortions along the supply chain (Repoussis et al., 2009; Upton et al., 1997).
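The MRP netting logic mentioned above can be illustrated in a few lines of Java (deciBase is Java-based, though this sketch is not deciBase code): gross requirements are derived from open orders through a per-unit bill of materials, and stock on hand is subtracted to give net requirements. All product names and quantities below are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of MRP netting: for each component,
// net requirement = max(0, ordered units x per-unit usage - stock on hand).
// Product names, BOM quantities and stock figures are illustrative only.
public class MrpNetting {
    static Map<String, Integer> netRequirements(int orderedUnits,
                                                Map<String, Integer> bomPerUnit,
                                                Map<String, Integer> onHand) {
        Map<String, Integer> net = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : bomPerUnit.entrySet()) {
            int gross = orderedUnits * e.getValue();          // gross requirement
            int stock = onHand.getOrDefault(e.getKey(), 0);   // available stock
            net.put(e.getKey(), Math.max(0, gross - stock));  // net requirement
        }
        return net;
    }

    public static void main(String[] args) {
        Map<String, Integer> bom = new LinkedHashMap<>();
        bom.put("acoustic panel", 6);   // 6 panels per enclosure (assumed)
        bom.put("louvre", 2);           // 2 louvres per enclosure (assumed)
        Map<String, Integer> stock = Map.of("acoustic panel", 10, "louvre", 25);
        // 5 enclosures ordered: 30 panels gross -> 20 net; 10 louvres -> 0 net.
        System.out.println(netRequirements(5, bom, stock));
    }
}
```

A full MRP run would additionally time-phase these requirements against lead times, but the netting step is the arithmetic core.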

Advantages of ERP Systems An ERP system seamlessly integrates and manages an organisation’s data from various departments, automates its processes and provides the information required for the efficient running of the organisation’s day-to-day activities. This results in better supply chain management, competitive advantage, reduced time-to-market, effective reaction to changes in demand, lower operating costs, improved strategic planning, higher productivity, increased sales, increased margin, increased market share, easier reporting and improved customer service. ERP systems that interface with the internet play a vital role in globalisation. Firms would expect ERP systems to result in reduced costs, enhanced decision support, more accurate and up-to-date information, increased customer satisfaction, help in enabling e-business, and the flexibility to change quickly (Su et al., 2010; Hakim et al., 2010; Kwahk et al., 2010). The next section will highlight the problems faced when investing in ERP systems.

Disadvantages of ERP Systems Introducing an ERP system is a major task (O’Leary, 2000; Gefen et al., 2000). However, if implemented successfully, an ERP can aid the integration of processes, which then frees up time for the organisation to look at improving those processes (Slater, 1999). Though early ERP systems focused on large enterprises, smaller enterprises increasingly use ERP systems (Christos, 2012). This KTP project is unusual in that, rather than adapting an off-the-shelf ERP solution to ACE’s very specific and specialised requirements, a bespoke ERP is being developed for the company. The problem with buying an off-the-shelf product is its compatibility with the company’s current business processes and practices. There is, inevitably, some modification of an organisation’s working arrangements and systems to suit an off-the-shelf ERP system’s structure. Whilst this may not be excessive for organisations that buy and sell widgets or assemble them into standard products, highly specialised uses require far greater modification, both of the system and of the organisation’s working practices, to achieve a workable compromise. Therefore, many companies have found it difficult to buy an ERP product and customise it to their present practices, or to change existing processes to fit in with an off-the-shelf ERP. There would also be considerable exposure to substantial ongoing maintenance costs and dependency upon the system supplier. In addition to the very significant implementation costs, there is a significant risk that the final system may be far from ideal.

Obviously, ERP systems do have some limitations, and this has been widely documented. ERPs can have a negative impact on the work practices and culture of an organisation (Allen et al., 2001; Bayraktar et al., 2009; Chudoba et al., 2005). Boiral (2007) claims that there is a need for extensive technical support prior to actual use. Birnholtz et al. (2007) and Schneberger (2004) claim that it takes an average of 8 months after a new system is installed to see any benefits.

Company History Acoustical Control Engineers (ACE) and Acoustical Control Consultants (ACC) are both small and medium-sized enterprises (SMEs) based in Cambridgeshire, UK. ACE provides engineered solutions to noise and vibration problems, while ACC provides acoustic consultancy. ACE and ACC’s 25 personnel analyse and solve a wide range of acoustic problems and/or design, manufacture and install noise and vibration control solutions to them. The projects undertaken include acoustic enclosures for supermarket refrigeration plant and for generators used in many situations, together with other more diverse applications such as controlling noise in the workplace, defending a kennels in court and even controlling noise on a luxury boat. Although distinct companies, ACE and ACC form a tightly integrated, family-run business, and it is important that information can be shared efficiently between the two organisations, particularly because many projects have both engineering and consultancy aspects. Figure 2 below shows some ACE products.

Figure 2: Weather Louvres, Cooling Tower Attenuation & Acoustic Refrigeration Plant Enclosure.



At the moment the company’s legacy systems are made up of spreadsheet and paper based systems. These spreadsheets have been developed over many years for a range of business management purposes such as basic handling of enquiries, some sales forecasting and contract management, although there are major gaps such as no CRM system. Some spreadsheets have been developed for standard acoustic models whereas other acoustic analyses are developed on a case by case basis. Most pricing for engineering products is undertaken using spreadsheets but these are not efficiently linked together causing significant duplication of effort and the other problems associated with ‘islands of automation’. There is virtually no integration between pricing and manufacturing information with little computerised manufacturing control. A Sage accounts package is used for invoicing, credit control etc. Clearly there is significant room for improvements that can make a major difference to both efficiency and productivity. Being acoustic organisations working with decibels, the obvious name for an ERP system for both companies was deciBase.

Problems and Project Motivations Many years ago an attempt was made to develop an ERP system within ACE, but with day-to-day business demands there was insufficient time available and the project stalled. To reduce the time commitment, the development was sub-contracted, but it was taking nearly as long to specify the system in sufficient detail as to develop it in-house, so this was also abandoned. Subsequently a second attempt was made to develop the system in-house, but this too proved futile. ACE and ACC then considered purchasing an off-the-shelf ERP system to ‘bespoke’ where necessary to suit their requirements. One of the key requirements of ACE is that its ERP be very flexible and provide a good model of reality. One example of this is that ACE and ACC have ongoing relationships with clients and other people lasting several decades, during which time these people may work for several different organisations. During ACE’s pre-KTP research it became apparent that virtually all off-the-shelf ERP or CRM systems could not properly model the relationship with individuals. Instead they created the equivalent of a contact card for an organisation’s employee, and when the person left that company a new contact card had to be created for them as an employee of the next organisation. With some systems it was possible to provide a link between the different contact cards, but this is typical of the adjustments that have to be made to off-the-shelf systems. Off-the-shelf ERP systems are designed to tackle some of ACE/ACC’s requirements, but they are very generic products which have to be configured or customised for each application. The vast majority of ACE’s products are unique to each project, greatly complicating a standard ERP system’s implementation and restricting what it would be able to achieve in practice.
Moreover, whilst an ERP system represents a fairly large investment in itself, the implementation costs are likely to be at least as much as the initial system and frequently significantly more. Taking all of the above factors into account, it became clear that a more appropriate solution would be for ACE to develop its own ERP system in-house. A KTP was a perfect means to actualise the ERP development plan.

ERP System Development System Requirements The key requirement of ACE/ACC is that their ERP provide a good model of reality. The ERP is to be able to undertake acoustic analyses and then seamlessly provide the resultant information to be used to select appropriate products, calculate prices, and issue and follow up quotations, converting them to contracts where orders are obtained. This order/contract information should then flow seamlessly through the manufacturing, delivery and installation system before triggering an invoice for the work. deciBase is a major development and could not realistically be completed in a single KTP with a single Associate, owing to the amount of time and work required for the design and implementation of a full ERP system. Given these constraints, it was decided, first, to undertake the development in stages, tackling the most critical areas first on a modular basis; and, secondly, that an incremental software development framework for a complex software project would be most appropriate.

Methodology This section describes the approach taken towards the actualisation of the project requirements and presents the tools, the technologies, and the design and architecture of deciBase. The following section will present the project outcomes. Development Framework Given deciBase’s requirements, we adopted a software design and implementation approach which decouples all system components but at the same time allows for easy system integration. We are using Object-Oriented Programming (OOP), a Test-Driven Development (TDD) approach and Domain-Driven Design (DDD). The advantage of this is that we focus development efforts on the business requirements of a specific system module and carry out sufficient code tests before deploying the completed module to the production server. Then we test again, train users and respond quickly to changes.
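As a toy illustration of the test-first rhythm just described (not actual deciBase code), the expectations for a small, invented pricing rule are written as assertions first, and the method is then implemented to make them pass. The 10% bulk-discount rule is purely hypothetical.

```java
// Toy TDD example: the assertions in main() were written first ("red"),
// then quoteTotal was implemented to satisfy them ("green").
// The pricing rule (10% off for 10+ items) is invented for illustration.
public class QuotePricing {
    // Total price for a quotation line item, applying a hypothetical
    // bulk discount of 10% on quantities of 10 or more.
    static double quoteTotal(int quantity, double unitPrice) {
        double gross = quantity * unitPrice;
        return quantity >= 10 ? gross * 0.9 : gross;
    }

    public static void main(String[] args) {
        // Red: the expectations, written before the implementation...
        assert quoteTotal(1, 100.0) == 100.0;   // no discount below 10 items
        assert quoteTotal(9, 10.0) == 90.0;     // still no discount at 9
        assert quoteTotal(10, 100.0) == 900.0;  // 10% discount at 10 items
        // ...Green: the implementation above now satisfies them.
        System.out.println("all checks passed");
    }
}
```

In deciBase a proper test framework would drive this cycle per module; plain assertions keep the sketch self-contained.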

Figure 3: deciBase - Agile Test-Driven Development

This development approach is Agile, involving use of the Model-View-Controller (MVC) design pattern. It has proved suitable for effective collaboration and has allowed us to incrementally develop only what is required. Our approach is also similar to Scrum, because we can identify the Scrum roles, events and artefacts, but differs in that a Scrum team requires a minimum of three developers, not one.

Tools and Technologies The ERP software development platform is a Windows 7 Enterprise 64-bit operating system with an Intel® Core™ i5-3360M CPU @ 2.80 GHz and 7.88 GB of free RAM. A modelling tool (Visual Paradigm UML) is being used for capturing system requirements, developing the business process models, drawing the Use Case diagrams, modelling the database with Entity Relationship Diagrams (ERDs) and compiling the system’s data dictionary. A combination of W3C web standards and Java Enterprise Edition (Java EE) technologies - HTML5, JavaScript, CSS3, XHTML, JSON, WebSockets, JPA 2.1, JPQL, EJB 3.1, Servlet 3.0, JSR 352 and JSF 2.2 - is used in developing deciBase. Java EE is being used to program the business components of deciBase, while XHTML, HTML5, JavaScript, jQuery and CSS3 are used to develop the deciBase user interfaces. SQL is used through the Java Persistence API (JPA) for querying the database, which runs on Oracle JavaDB 10.9. deciBase is deployed onto the GlassFish 4.0 open-source enterprise application server. The deciBase source code is edited and compiled using the NetBeans 8.0 Integrated Development Environment (IDE) and Maven.

KTP Associates Conference 2014

41


System Design and Architecture
The system design identifies five component layers: the database, entity, service, presentation and user interface layers. This ensures a clean division of responsibility for code maintainability and scalability. The last four layers are non-distributed and are deployed together into the same application server container; however, each layer is isolated behind well-defined interfaces. Together they form the Model, View and Controller of the MVC design pattern. These four layers are separate from the first layer, the database, and connect to it via JDBC. Figure 4 shows the system design.

Figure 4: deciBase component layers

Database Layer - The database is the centralised data store holding the relational data model of deciBase. It is normalised up to 3rd normal form, thereby eliminating duplicates.

Entity Layer - The deciBase entity layer maps Plain Old Java Objects (POJOs) to database entities. It adopts a design-by-contract principle which ensures that security validations, integrity constraints, patterns and precisions are not violated before data is written to the database. The entity layer uses precompiled database queries, popularly known as prepared statements or stored procedures, which improves the performance, security and efficiency of deciBase.

Service Layer - The service layer handles deciBase's business logic and provides transactions, security, etc. It manages entities in the entity layer and the database layer.

Presentation Layer - The presentation layer handles deciBase's presentation logic: it presents data to the user interface, collects user inputs from the user interface, delegates user inputs to the entity and service layers, and controls user interface navigation.

User Interface Layer - The layer through which users interact with deciBase. Data validation may be performed across all layers.

We are developing the business domain objects of deciBase strictly on Java EE standard libraries, which helps ensure that deciBase stands the test of time as these libraries remain supported by Oracle. The key design glue that has enabled a loosely coupled, highly extensible system architecture and seamless component integration is OOP using Inheritance, Interfaces and Contexts and Dependency Injection (CDI). CDI allows the injection of CDI beans from any of the model layers wherever they are required. We have also avoided code duplication. The code snippet in Figure 5 shows how CDI is used in wiring the business domain model components from the various modules.
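The interface-plus-injection pattern described above can be illustrated in plain Java, with constructor injection standing in for CDI's @Inject so the sketch runs without a container. All class and method names here are illustrative, not from the deciBase codebase.

```java
// Plain-Java sketch of the decoupling the text describes: the service
// layer depends only on an interface, and the concrete implementation
// is injected from outside (CDI would do this with @Inject at runtime).
interface ProjectRepository {           // entity-layer contract
    String findTitleById(int id);
}

class InMemoryProjectRepository implements ProjectRepository {
    public String findTitleById(int id) {
        return id == 1 ? "ERP Rollout" : "Unknown";
    }
}

public class ProjectService {           // service-layer business logic
    private final ProjectRepository repo;

    // In a CDI container this dependency would be annotated @Inject.
    public ProjectService(ProjectRepository repo) {
        this.repo = repo;
    }

    public String describe(int id) {
        return "Project: " + repo.findTitleById(id);
    }

    public static void main(String[] args) {
        ProjectService svc =
                new ProjectService(new InMemoryProjectRepository());
        System.out.println(svc.describe(1)); // Project: ERP Rollout
    }
}
```

Because the service knows only the interface, the repository implementation can be swapped (for example, a JPA-backed one in production, an in-memory one in tests) without touching the business logic.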

Figure 5: CDI, Inheritance and Interface incremental development enablers




Academic and Project Outcomes
Project Outcomes - To date, the Accounts, Activity, Project, Consultancy, Communications, Contact, Core, Security, Reporting, Legacy, Admin and Maintenance modules have been developed and user training has taken place. Seven of these modules are completely signed off, while the rest are on incremental sign-off, usually after integration testing. The Quotation and Contract modules are currently being developed. deciBase is simple and user friendly. Two-factor authentication, least-privilege enforcement, access control and anti-password-hacking measures are some of its security features. deciBase is more advanced than most off-the-shelf ERP systems. One of its major advantages is flexibility and close modelling of reality. For instance, deciBase easily copes with the increasingly common situation where one person has multiple employments at the same time: it models this accurately by treating people as entities in their own right and establishing employment links to organisations as appropriate. deciBase provides seamless integration between ACE and ACC whilst maintaining their individual identities and brands. deciBase is yet to be commissioned. However, as would be expected, the work undertaken developing the system so far has had several significant effects on ACE and ACC, and has acted as a catalyst for change in various parts of these companies' businesses.

Academic Outcomes - The project Associate is carrying out ongoing PhD research on intrusion detection systems in cloud environments at the University of Hertfordshire (UH). This KTP has produced a seminar, supervision support for two MSc students and two academic papers before this one. The seminar, on database and application security, was attended by final-year engineering students at UH on 16th April 2013. The first academic paper, 'Towards Secure Web Service-based e-Businesses', was presented by the Associate at the 14th Postgraduate Networking Conference (PGNet) 2013 at Liverpool John Moores University, UK, accessible via http://www.cms.livjm.ac.uk/pgnet2013/Proceedings/papers/1569775519.pdf. The second academic paper, 'Cloud Security: A Review of Recent Threats and Solution Models', was presented by the Associate at the International Conference on Cloud Security and Management (ICCSM) 2013 at the University of Washington, Seattle, USA on 17th October 2013. This paper is accessible through the ICCSM conference proceedings and at http://researchprofiles.herts.ac.uk/portal/files/2819772/Cloud_Security_A_Review_of_Recent_Threats_and_Solution_Models.pdf

Conclusion
Much as the benefits of ERP systems cannot be overemphasised, recent studies indicate that 30 to 50 percent of ERP deployments worldwide are problematic (Umble et al., 2002; Mabert et al., 2003). This is attributed to firms failing to manage their organisational needs at the same time as the technical implementation of their ERP systems (Scott et al., 2000; Koch, 2002). Research shows that it takes an average of 8 months after a new system is installed to see any benefits from it (Koch et al., 1999). An ERP is a long-term investment, with benefits arising in the medium to long term rather than the short term (Allee, 1999). In the past 3 months the various modules of deciBase have come together and positive user testing has taken place. We are aware of the problems which have occurred in other ERP deployments. Many of these were due to the customisation of off-the-shelf software, or to trying to change existing processes to fit in with the software. As ACE has developed its own ERP from first principles, this has not been a



problem for us and we aim to see the successful deployment of all the modules of our ERP system, deciBase, before the project ends in late September 2014.

References
REPOUSSIS, P. P., PARASKEVOPOULOS, D. C., ZOBOLAS, G., TARANTILIS, C. D. & IOANNOU, G. 2009. A web-based decision support system for waste lube oils collection and recycling. European Journal of Operational Research, 195, 676-700.
UPTON, D. & MCAFEE, A. 1997. Vandelay Industries, Inc. HBS case study. HBS.
SU, Y.-F. & YANG, C. 2010. Why are enterprise resource planning systems indispensable to supply chain management? European Journal of Operational Research, 203, 81-94.
HAKIM, A. & HAKIM, H. 2010. A practical model on controlling the ERP implementation risks. Information Systems, 35, 204-214.
KWAHK, K.-Y. & AHN, H. 2010. Moderating effects of localization differences on ERP use: A socio-technical systems perspective. Computers in Human Behavior, 26, 186-198.
O'LEARY, D. E. 2000. Enterprise Resource Planning Systems: Systems, Life Cycles, Electronic Commerce, and Risk. New York, Cambridge University Press.
GEFEN, D. 2000. Lessons learnt from the successful adoption of an ERP: The central role of trust. In: ZANAKIS, S., DOUKIDIS, G. & ZOPOUNIDIS, C. (eds.) Recent Developments and Applications in Decision Making. Kluwer Academic.
SLATER, D. 1999. An ERP package for you... and you... and you... and even you [Online]. Available: http://www.cio.com/archive/021599_erp.html [Accessed 1 June 2000].
CHATZICHRISTOS, D. C. 2012. Influences of Enterprise Resource Planning (ERP) systems implementation in Small and Medium Enterprises in Greece. PhD Thesis, University of Wales, Trinity St David.
ALLEN & KERN. 2001. ERP implementation: Stories of power, politics and resistance. In: IFIP Working Group 8.2 Conference on Realigning Research and Practice in Information Systems Development: The Social and Organisational Perspective, 27-29 July 2001, Boise, Idaho, USA.
BAYRAKTAR, E., DEMIRBAG, M., KOH, S. C. L., TATOGLU, E. & ZAIM, H. 2009. A causal analysis of the impact of information systems and supply chain management practices on operational performance: Evidence from manufacturing SMEs in Turkey. International Journal of Production Economics, 122, 133-149.
CHUDOBA, K. M., WYNN, E., LU, M. & WATSON-MANHEIM, M. B. 2005. How virtual are we? Measuring virtuality and understanding its impact in a global organization. Information Systems Journal, 15, 279-306.
BOIRAL, O. 2007. Corporate greening through ISO 14001: A rational myth? Organization Science, 18, 127-146.
BIRNHOLTZ, J. P., COHEN, M. D. & HOCH, S. V. 2007. Organizational character: On the regeneration of Camp Poplar Grove. Organization Science, 18, 315-332.
SCHNEBERGER, S. L. 2004. STATER NV: E-servicing strategies. Journal of Information Technology, 19, 108-116.
UMBLE, E. J. & UMBLE, M. M. 2002. Avoiding ERP implementation failure. Industrial Management, 44, 25.

44


MABERT, V. A., SONI, A. & VENKATARAMANAN, M. A. 2003. Enterprise resource planning: Managing the implementation process. European Journal of Operational Research, 146, 302-314.
KOCH, H. 2002. Business-to-business electronic commerce exchanges: The alliance process. Journal of Electronic Commerce Research, 3, 67-76.
KOCH, C., SLATER, D. & BAATZ, E. 1999. The ABCs of ERP [Online]. Available: http://www.cio.com/forums/erp/edit/122299_erp_content.html [Accessed June 2000].
ALLEE, V. 1999. New tools for a new economy. Perspectives on Business and Global Change, 13.





Scheduling and Rostering of Flexible Labour at Container Port Terminals using Metaheuristic Algorithms

Ali Rais Shaghaghi(1*), Tom Corkhill(2) and Abdellah Salhi(1)
1. Department of Mathematical Sciences, University of Essex, Colchester, United Kingdom
2. Operations Development, Hutchison Ports (UK), Felixstowe, United Kingdom
* Corresponding author: araiss@essex.ac.uk

Abstract: The rise in container shipments places higher demands on container terminals, container logistics and management, as well as on physical equipment. This results in increased competition between container ports, especially between geographically close ones. The competitiveness of a container port depends on a number of factors, in particular the time taken to complete discharge and loading operations, operational efficiency and the resulting contractual rates offered to customers. A competitive edge may therefore be gained if containers can be processed quickly and efficiently within the port by matching resource levels to varying customer demands. Many of the operations at container ports are carried out by a skilled labour force; its efficient deployment and allocation is therefore of great importance, as labour costs are relatively high. In this paper we introduce an optimisation model combined with a metaheuristic algorithm to efficiently deploy flexible employees with multiple skills. The key objectives are to maximise utilisation, reduce cost and provide good working conditions for employees. Our method provides good-quality schedules for employees whilst reducing the associated costs of labour deployment.

Keywords: workforce, labour, combinatorial optimisation, metaheuristics, scheduling, container port.

1 Introduction
In the ever more competitive market of container port operations, port operators are looking to improve service levels whilst reducing costs. Balancing cost and performance is a major challenge in this line of business, and ensuring the best service levels whilst reducing associated costs requires efficient management of resources. Although many port operators are bringing more automation into their operations, labour remains one of the highest-cost resources that port operators strive to minimise. Container terminal operators serve a number of shipping lines by discharging and loading their periodically arriving container vessels according to an agreed timetable [4]. Deviations in vessels' travel times lead to stochastic arrivals around the scheduled arrival time. This variability leads to a requirement for variable staffing at various stages, and thus for a flexible labour schedule that matches service demand whilst ensuring high customer satisfaction and cost-efficient operations. In some container ports labour is the largest contributor to the cost of sales, so any reduction in the cost associated with labour deployment would have a significant positive effect on a container port's profitability. Whilst cost reduction is always of great interest, it is crucial to ensure that any plan that might lead to cost reduction will not have a negative impact on service level. In order to match this variability the labour planning process



is carried out in two stages: first, the labour demand is determined; second, based on the demand requirements, appropriate labour is rostered and scheduled. Usually staff scheduling and deployment is carried out manually by labour planners, and often takes a long time to complete. In this paper we present an optimisation framework that deals with the complex rostering of container port labour. The rostering of the labour follows complex rules and constraints related to operational, legal and working-condition requirements. Our optimisation framework produces solutions that reduce the cost of labour deployment while satisfying the various constraints. We present the optimisation model formulation and use a metaheuristic method to solve the problem to near-optimality. The optimisation framework could act as a decision support system to guide the labour planning team in effectively utilising available labour. The rest of this paper is organised as follows: Section 2 presents the related literature; Section 3 describes the general specifications of the labour scheduling model; Section 4 presents the flexible labour scheduling model; Section 5 presents experimental results; and Section 6 concludes the paper.

2 Related Work
Similar scheduling problems are studied under the term 'staff scheduling and rostering'. In [1] it is described as the process of constructing work timetables for staff so that an organisation can satisfy the demand for its goods or services. Key elements of staff scheduling and rostering are categorised into the following groups [1]:

• Demand modelling: how many staff are needed over a planning period;
• Shift scheduling: what shifts are to be worked, together with an assignment of the number of employees to each shift, in order to meet demand;
• Line of work construction: sometimes referred to as work schedules or roster lines, spanning the planning horizon, for each staff member;
• Task assignment: it may be necessary to assign one or more tasks to be carried out during each shift. These tasks may require particular staff skills or levels of seniority and must therefore be associated with particular lines of work;
• Staff assignment: the assignment of individual staff to the lines of work.

It is often difficult and computationally infeasible to solve the problems related to all these elements together, so scheduling problems often have to be solved piecemeal while considering all related elements. The staff scheduling problem can also be categorised by its application to various industries: for example, call centres, health care systems (commonly known as nurse rostering), protection and emergency services, civil services and utilities, venue management, financial services, hospitality and tourism, retail and manufacturing all require staff scheduling mechanisms to efficiently manage their workforce. One of the most active research fields is nurse rostering [2], [3]. Apart from the modelling aspect of staff scheduling, an appropriate solution method must be chosen to solve the scheduling problem efficiently. These approaches are categorised below.

Demand modelling: demand modelling is a complex process that requires detailed information related to various operational aspects of the business. Forecasting, modelling and statistical



techniques are used to provide an accurate and efficient demand model that matches the business requirements.

Artificial Intelligence (AI) approaches: techniques such as constraint programming and work on fuzzy systems are among the most prominent uses of AI in the staff scheduling field. Most such systems act as decision support systems in cases where human interaction with the software is necessary.

Constraint Programming (CP): CP provides a powerful tool for finding feasible solutions to rostering problems. This technique is particularly useful when the problem is highly constrained and/or when any feasible solution will suffice even if it is not optimal.

Metaheuristics: metaheuristics form an important class of methods for solving hard, usually combinatorial/discrete, optimisation problems. Typically, these methods are used to solve problems that cannot be solved by traditional approaches such as steepest descent or greedy local search. There is growing interest in using such algorithms for various optimisation problems [6], [7], [8].

Mathematical programming approaches: here, scheduling and rostering problems are formulated as linear integer programs or general mathematical programs. Exact mathematical programming approaches solve scheduling problems to optimality when their size is manageable; however, they often fail on instances with a large search space, which is typically the case in high dimensions.

3 Labour Optimisation Problem
The labour scheduling problem at container terminals consists of planning the shifts of employees while minimising the costs associated with the workforce. While minimising labour costs, the optimisation model should also address other desired targets such as meeting demand requirements, establishing acceptable working conditions and minimising risks associated with uncertainty in the availability of the workforce (e.g. overtime availability over weekends). Various factors relating to the company, the employees, demand requirements and legislation have to be considered, and a balance between these factors must be achieved. Several key elements directly affect the schedule and could form the basis for the objectives in the optimisation model:

i) Overtime deployment cost: there are variable pay rates for overtime workers based on the type of shift. These rates are typically higher for night and weekend shifts. Therefore, in periods where demand is not matched by contract employees, a good (lower-cost) plan would, subject to the other constraints, reduce the allocation of overtime employees to shifts that attract higher pay rates.

ii) Uncertainty in demand and supply information: due to operational aspects and human elements, it is frequently not possible to have accurate demand data. The longer the forecast period, the more uncertainty arises in the demand data. For example, when planning future shifts, a good plan should consider a balanced allocation of overtime and flexible employees.

iii) Licences/competencies: employees often hold a combination of licences/competencies in addition to a primary role. Employees can be allocated to different jobs whilst following certain contractual rules and ensuring that they hold the appropriate licence. This flexibility allows the labour planner to provide a balanced match to the demand. In addition to meeting demand, it is also desirable to provide frequent and balanced deployment exposure for employees to the various licences they hold, because a licence that is not used for a certain amount of time becomes dormant and the employee requires retraining before it can be used again.
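The effect of the higher night/weekend rates on plan cost can be made concrete with a small sketch. The 50% uplift follows the figure quoted in Section 5 of this paper; the base rate itself is a made-up number for illustration.

```java
// Illustrative overtime cost comparison. The 1.5x night/weekend uplift
// is taken from the paper (Section 5); BASE_RATE is a hypothetical figure.
public class OvertimeCost {
    static final double BASE_RATE = 100.0;  // assumed cost per overtime shift
    static final double UPLIFT = 1.5;       // night/weekend multiplier

    public static double shiftCost(boolean nightOrWeekend) {
        return nightOrWeekend ? BASE_RATE * UPLIFT : BASE_RATE;
    }

    // Cost of covering uncovered shifts with overtime labour.
    public static double planCost(int dayShifts, int nightOrWeekendShifts) {
        return dayShifts * shiftCost(false)
             + nightOrWeekendShifts * shiftCost(true);
    }

    public static void main(String[] args) {
        // Moving 4 uncovered shifts from weekend overtime to weekday overtime:
        System.out.println(planCost(0, 4)); // 600.0
        System.out.println(planCost(4, 0)); // 400.0
    }
}
```

This is exactly why a good plan steers FTO cover towards night and weekend shifts, leaving the cheaper weekday shifts for overtime.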

3.1 Scheduling Model
The aim of labour planning is to supply sufficient labour to cover the labour resource demand. Most of this demand is covered by deploying basic contract employees: employees with a fixed shift pattern. For example, a fixed-contract employee knows that he/she has to be present at the workplace every Monday morning to complete a 12-hour shift as a crane driver. Any additional labour is covered via two main sources: Flexible Terminal Operatives (FTOs) and staff volunteering for overtime work. FTOs are employees with flexible shift patterns; they do not have a fixed shift pattern and are deployed as required. The combination of employee information, demand data and other related operational, legal and business requirements are the building blocks of the scheduling model. Figure 1 gives an overall view of the model, with the key inputs (companies, with efficiency, cost, service-level and well-being criteria; the union collective agreement on working arrangements and O/T and FTO deployment; employees, with their preferences, wishes and well-being; and demand requirements covering duties, shifts, skills, licences and location) and the scheduling output (assignment of duties and shifts, and rest periods) illustrated by arrows.

Figure 1: Labour Scheduling Problem

4 Optimised Flexible Staff Deployment
As described in the previous section, where contractual labour is not sufficient to match demand, FTO and overtime labour is deployed. Given the relatively high pay rate for overtime labour, especially during night and weekend shifts, efficient deployment of FTO labour can reduce total labour deployment costs. An optimised FTO deployment could potentially lead to these benefits:

• Reduction in overtime pay as a result of reducing rate B (night, weekend) payments;
• Better planning to balance the deployment of floaters over the planning period;
• Analysis of various deployment scenarios by altering objectives and constraints;
• Introduction of additional preferences to improve working conditions for the flexible workforce.

4.1 Problem Description and Scope
The process of matching workforce demand can be described in two stages: first, allocating fixed-contract employees; second, matching any additional requirement with employees on flexible contracts and those willing to do overtime. In this paper we assume that fixed-contract employees have already been allocated to cover part of the labour demand. The resource demand requirement should be available for all shifts that require workforce assignment. For example, if a specific shift requires 24 crane drivers and the fixed-contract workforce can provide 20, the excess resource demand for that shift is 4 drivers. Allocation of flexible workforce and overtime will cover the excess of 4 drivers for that particular shift. In this paper the optimisation focuses on the deployment of FTOs. The deployment cost is controlled by calculating the cost of overtime employees for each shift at the appropriate shift rate; these are the shifts that are not covered by flexible staff.
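The excess-demand calculation in the worked example above (24 crane drivers required, 20 covered by fixed contracts) is simple arithmetic per shift and skill:

```java
// Excess resource demand per shift: demand minus fixed-contract cover,
// floored at zero (a shift cannot have negative excess demand).
public class ExcessDemand {
    public static int excess(int required, int fixedContractCover) {
        return Math.max(0, required - fixedContractCover);
    }

    public static void main(String[] args) {
        // Worked example from the text: 24 crane drivers needed, 20 fixed,
        // leaving 4 to be covered by FTOs or overtime.
        System.out.println(excess(24, 20)); // 4
    }
}
```

The optimisation then decides, shift by shift, how much of this excess to cover with FTOs and how much falls to overtime.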

4.2 Planning Horizon
There is a trade-off between planning for longer and shorter time horizons. There is a significant advantage to being able to plan several days, or even weeks, in advance; however, there are uncertainties in the elements defining the demand forecast. The general idea is that if we plan over longer time horizons we are able to utilise the workforce more efficiently. However, there are two downsides: firstly, planning for longer periods is a computationally intensive task which may take hours to complete even on fast computers; secondly, the demand forecast information becomes less reliable over longer periods. A careful selection of the planning horizon is therefore important in order to balance these criteria.

4.3 Flexible Labour Optimisation Criteria
An optimal allocation process requires a detailed definition of goals, targets and constraints. The allocation constraints can be categorised into two groups: hard and soft constraints. Hard constraints are criteria that must be satisfied in order to have a feasible plan; violating any hard constraint results in an infeasible plan. Soft constraints may be violated; they act as a set of preferences, and violating them does not affect the feasibility of the schedule solution. In cases where there are several feasible solutions, i.e. all hard constraints are satisfied, the aim is to find an optimal solution with regard to maximising or minimising the objectives. For example, one of the main objectives is to minimise the associated cost of each deployment whilst at the same time balancing the availability of the flexible workforce in periods where there is less possibility of deploying
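The hard/soft distinction maps naturally onto a two-level score, in the spirit of OptaPlanner's HardSoftScore: any hard violation dominates all soft preferences when comparing plans. A standalone sketch (field names are illustrative, not the paper's data model):

```java
// Two-level plan score: plans are compared first on hard violations
// (feasibility) and only then on soft violations (preferences).
// Mirrors the HardSoftScore idea used by OptaPlanner; sketch only.
public class PlanScore implements Comparable<PlanScore> {
    final int hardViolations;
    final int softViolations;

    PlanScore(int hard, int soft) {
        hardViolations = hard;
        softViolations = soft;
    }

    public boolean isFeasible() { return hardViolations == 0; }

    // Positive result means "this plan is better": fewer hard violations
    // wins outright; soft violations only break ties.
    public int compareTo(PlanScore other) {
        if (hardViolations != other.hardViolations)
            return Integer.compare(other.hardViolations, hardViolations);
        return Integer.compare(other.softViolations, softViolations);
    }

    public static void main(String[] args) {
        PlanScore feasible = new PlanScore(0, 12);
        PlanScore cheaperButInfeasible = new PlanScore(1, 0);
        // A feasible plan beats any infeasible one, however good its
        // soft score.
        System.out.println(feasible.compareTo(cheaperButInfeasible) > 0);
    }
}
```

This ordering is what lets the solver treat preferences seriously without ever trading feasibility away for them.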



overtime workers (whose availability is reduced during weekend and holiday seasons). Examples of some of the constraints and targets that can be considered are listed below in Table 1.

Hard constraints: employee availability (holiday, sickness); collective agreement and regulatory requirements; shift pattern; matching resource demand (skills and quantity).
Soft constraints: employee working-shift preferences; maintaining staff skill exposure.
Objectives (max and min): minimising cost of deployment; maximising assignment of flexible workers to times where overtime availability is scarce.

Table 1: Flexible Labour Optimisation Criteria

4.4 Constraints
The hard constraints related to flexible staff scheduling are as follows:

• HC-1: deployment in each shift and skill licence should not exceed the demand.
• HC-2: for each day, an employee may start at most one shift.
• HC-3: a maximum of 7 x 12-hour shifts over a reference period of two weeks.
• HC-4: one complete weekend off in any rolling 4-week period.
• HC-5: a maximum of 4 night shifts in a 2-week reference period.
• HC-6: a maximum of 4 consecutive working days (12-hour shifts).
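Constraints like HC-6 reduce to simple checks over an employee's assigned days. A sketch of the consecutive-working-days check (the boolean-per-day roster encoding is an assumption for illustration, not the paper's data model):

```java
// HC-6 check: no more than a given number of consecutive working days
// (12-hour shifts). The roster is encoded as one boolean per day:
// true = working, false = off.
public class ConsecutiveDaysCheck {
    public static boolean satisfiesHC6(boolean[] working, int maxConsecutive) {
        int run = 0;
        for (boolean day : working) {
            run = day ? run + 1 : 0;       // extend or reset the streak
            if (run > maxConsecutive) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        boolean[] ok  = {true, true, true, true, false, true, true};
        boolean[] bad = {true, true, true, true, true, false, false};
        System.out.println(satisfiesHC6(ok, 4));  // true
        System.out.println(satisfiesHC6(bad, 4)); // false
    }
}
```

In a solver, such checks are evaluated incrementally per candidate move; a single violation marks the whole plan infeasible.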

4.5 Optimisation Algorithm
To solve the scheduling problem we need an optimisation algorithm. Because of the size of this problem, exact mathematical optimisation algorithms, e.g. mixed integer programming, are not always able to produce solutions in a reasonable time. Metaheuristic algorithms, however, tend to produce near-optimal results in a relatively short period of time. In this paper we have modelled the optimisation problem in an open source optimisation framework named OptaPlanner (see [5]). This framework allowed the scheduling model to be programmed directly, with appropriate algorithms plugged in to solve the problem. We use a tabu search metaheuristic to solve the scheduling problem. Tabu search is based on local search techniques that explore the neighbourhood of a candidate solution; more specifically, it uses a memory structure (the tabu list) to avoid revisiting previously seen solutions with inferior objective values.
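A minimal tabu search skeleton on a toy one-dimensional problem illustrates the mechanics; the +/-1 neighbourhood, the tenure value and the quadratic objective are placeholders for the much richer rostering moves and scoring used in the real framework.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy tabu search: minimise f(x) = (x - 7)^2 over the integers by moving
// +/-1, keeping a short memory (tabu list) of recently visited solutions
// so the search cannot immediately cycle back. The best solution seen so
// far is tracked separately, since tabu search accepts worsening moves.
public class TabuSearchSketch {
    static int objective(int x) { return (x - 7) * (x - 7); }

    public static int solve(int start, int iterations, int tabuTenure) {
        int current = start, best = start;
        Deque<Integer> tabu = new ArrayDeque<>();
        for (int i = 0; i < iterations; i++) {
            Integer bestNeighbour = null;
            for (int candidate : new int[] {current - 1, current + 1}) {
                if (tabu.contains(candidate)) continue;   // tabu move
                if (bestNeighbour == null
                        || objective(candidate) < objective(bestNeighbour))
                    bestNeighbour = candidate;
            }
            if (bestNeighbour == null) break;             // all moves tabu
            current = bestNeighbour;                      // may be worsening
            tabu.addLast(current);
            if (tabu.size() > tabuTenure) tabu.removeFirst();
            if (objective(current) < objective(best)) best = current;
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(solve(0, 50, 3)); // 7
    }
}
```

In the rostering setting, a "move" reassigns an employee to a different shift, the objective is the hard/soft score of the plan, and the tabu list stores recent moves rather than whole solutions.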




5 Experimental Setup and Results
In this section we present the experimental results of the optimisation software for the deployment of flexible labour. The challenges involved in evaluating the optimisation framework relate to three categories:

Creating feasible staff schedules: all of the hard constraints have to be fully satisfied.
Handling problems with large numbers of employees: due to the combinatorial nature of this type of application, an explosion in the size of the problem is often observed, making the problem potentially truly intractable.
Optimising key objectives: the optimisation process must achieve near-optimality on all relevant objectives.

5.1 Experimental Setup
We evaluate the effectiveness of the optimisation framework for two scenarios.

Scenario 1
In this scenario demand is equal for each day of the two-week period. Shifts represent day or night only, and all shifts have the same skill requirement. The objective is simply to maximise the number of day shifts covered; that is, our preference is to cover more day shifts than night shifts.

Figure 2: Scenario 1 demand requirement. D = Day, N = Night

The results presented in Figure 3 indicate that the optimisation software is able to achieve the objective successfully while satisfying all related constraints. For example, each employee works no more than 7 days in the two-week horizon and no more than 4 consecutive working days.

Figure 3: Scenario 1 results

Scenario 2
In this scenario we created a data set of employees with a mixed set of skills. Demand data was also created for each shift. Table 2 below details the sample data on which our test is based.



Number of employees: 90
Number of skills/licences: 4
Number of total shifts for the two-week period: 360

Table 2: Scenario 2 input data

Shift                     Fri  Sat  Sun  Mon  Tue  Wed  Thu
RTG Driver Day             18   10   15   14   10   14    8
RTG Driver Night            7    8   15   14   12   13   10
IMV Driver Day             15    -    -    -    -    -    -
IMV Driver Night            4    -    -    -    -    -    -
Crane Coordinator Day       5    -    -    -    -    -    -
Crane Coordinator Night     2    -    -    -    -    -    -
Berth Operator Day          4    -    -    -    -    -    -

Table 3: Scenario 2 demand requirement

This experimental problem is relatively large, with 360 decision variables, each of which can take 90 different values. This gives 90^360 permutations, so the candidate solutions form a very large search space. The key objective in this scenario is to maximise the workload of flexible staff on night and weekend shifts, because any weekend or night shift that is not covered by flexible staff has to be covered by overtime employees. The weekend and night overtime rate is 50% higher than the normal overtime rate, therefore the cost associated with the total employee roster is higher if overtime staff are deployed on night and weekend shifts. The optimisation results show that the software is able to produce feasible rosters (schedules that satisfy the major constraints) while reducing the cost associated with labour deployment. Figure 4 represents the portion of shifts that are not covered by employees; the four empty areas relate to weekends. As can be seen, the number of uncovered night shifts is zero and there are no uncovered shifts over the weekend periods. This demonstrates the effectiveness of the optimisation framework in maximising cover during weekend and night shifts. A solution with these characteristics reduces the need for night and weekend overtime cover, which is known to be more expensive than normal cover.
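The size claim above can be checked directly: 90^360 is roughly 10^703, a 704-digit number, far beyond any exhaustive enumeration.

```java
// Order of magnitude of the scenario-2 search space: 90 possible values
// for each of 360 decision variables, i.e. 90^360 candidate rosters.
public class SearchSpaceSize {
    // Number of decimal digits of base^exponent
    // = floor(exponent * log10(base)) + 1.
    public static long decimalDigits(int base, int exponent) {
        return (long) Math.floor(exponent * Math.log10(base)) + 1;
    }

    public static void main(String[] args) {
        System.out.println(decimalDigits(90, 360)); // 704
    }
}
```

Numbers of this size are precisely why the paper turns to metaheuristics rather than exhaustive or exact search.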



Figure 4: Scenario 2 ‘not resourced’ shifts

Conclusion
The increasing competition in the container port business requires efficient delivery of operations at all levels. Efficient use of labour resources is key to the successful delivery of operations and to increasing the quality of customer service. In this paper we presented an optimisation framework to schedule and roster flexible labour in order to maximise efficiency and reduce associated costs. The optimisation model captures the operational, legal and work-related constraints that must be satisfied in order to have an acceptable work schedule whilst reducing deployment costs. The structure of these scheduling problems makes metaheuristic optimisation algorithms the natural choice for solving them in reasonable time with near-optimal solutions; exact methods are often of no use. Further extension of this work includes a more extensive model covering all employees in all sections. The key challenge posed by this extension is the increase in problem size; it may well be necessary to adopt a decomposition approach in order to handle large instances of the problem.



References

[1] Ernst, A.T., Jiang, H., Krishnamoorthy, M., and Sier, D. "Staff scheduling and rostering: a review of applications, methods and models." European Journal of Operational Research 153, no. 1 (2004): 3-27.

[2] Wolfe, H., and Young, J. "Staffing the nursing unit, Part II: The multiple assignment technique." Nursing Research (1965): 299-303.

[3] Wolfe, H., and Young, J. "Staffing the nursing unit, Part I: Controlled variable staffing." Nursing Research (1965): 237-243; and Trivedi, V. Optimum Allocation of Float Nurses Using Head Nurses' Perspectives. Ph.D. thesis, University of Michigan, 1974.

[4] Steenken, D., Voß, S., and Stahlbock, R. "Container terminal operation and operations research: a classification and literature review." OR Spectrum 26, no. 1 (2004): 3-49.

[5] http://www.optaplanner.org/

[6] Rais Shaghaghi, A., Glover, T., Kampouridis, M., and Tsang, E. "Guided Local Search for Optimal GPON/FTTP Network Design." In Computer Networks & Communications (NetCom), pp. 255-263. Springer New York, 2013.

[7] Mashwani, W.K., and Salhi, A. "A decomposition-based hybrid multi-objective evolutionary algorithm with dynamic resource allocation." Applied Soft Computing 12 (2012): 2765-2780.

[8] Salhi, A., and Vazquez-Rodriguez, A. "Tailoring hyper-heuristics to specific instances of an optimisation problem using affinity and competence functions." Memetic Computing, 2013. DOI: 10.1007/s12293-013-0121-07



Branding your Local Charity Shop: A Case Study of St Wilfrid's Hospice Shops Innovative Branding Strategy Implementation

Veronica Malley1,2, Harvey Ells2, Chris Dutton2

1 St Wilfrid's Hospice, 1 Broadwater Way, Eastbourne, East Sussex, BN22 9PZ.
2 University of Brighton, School of Sport and Service Management, Hillbrow, Denton Road, Eastbourne BN20 7SR

Introduction

According to a Third Sector report, "researchers have found that there were neglected opportunities and that charity shops have a potential for showcasing, promoting and hosting services that are often underexploited" (Pudelek, 2013). St Wilfrid's Hospice in Eastbourne implemented a strategy to begin developing a clear brand position which would ensure a distinctive and identifiable offer (Torres-Baumgarten and Yucetepe 2007). "Charity retail practice is an accepted nonprofit retailing format that is often set up by filling vacant premises on the high streets of small towns and the suburbs of large cities in the United Kingdom" (Liu et al., 2013; Horne, 2000; Parsons, 2004a, 2004b). Branding has been called on by organisations both in the charity retail sector and in mainstream retail to position their outlets within the market, giving shops the potential advantage to overcome significant challenges that face management today (Berinstain and Zorilla 2011; Burt and Carralero-Encinas 2000; Burt and Davies 2010). When the University of Brighton began its project with St Wilfrid's Hospice in Eastbourne in 2012, as part of a two-year Knowledge Transfer Partnership, it was clear that other charity shops such as Oxfam and the British Heart Foundation had a strong lead with consumers, due in large part to their branding strategies.

This branding ensured that they would be guaranteed regular footfall and a higher standard of donated stock, and that their respective charities would receive greater awareness. As a small local charity, St Wilfrid's were encouraged by the project team to begin developing their brand on the high street. According to Ailawadi and Keller (2004, p. 331), "the highly competitive nature of retail can make branding especially important when it comes to influencing customer perceptions and drive store choice and loyalty." This notion was key when it came to designing the St Wilfrid's Retail brand. In order to stand out from the other charity retailers on the high street, our offers had to be made clear, identifiable and replicable in our shops, despite the difficult nature of merchandising donated goods. This paper examines the process the organisation took to collect research on retail branding and to apply that branding to merchandise in its shops, in advertising and online.

Organisational capabilities such as fostering a positive attitude and staff loyalty are sources of positional advantage and lead to superior performance (Barney 1991; Day and Wensley 1988). The scope of this paper extends beyond the traditional notions that make up much of the branding literature to discuss how management plays a large role in making brands more recognisable on the high street as well as online and in advertising. The way central management conveys brand messages to employees must be approached carefully. "On a more abstract, organisational level, the brand promotes the identity and underlying values of a unique culture by communicating the messages, products, and services created by that culture" (Holland, 2006, p. 5). After attending a two-day intensive course at Central Saint Martins on Exploring Brand Opportunities (Hestad 2013), it became clear that St Wilfrid's needed to holistically distribute new ideas about branding to their shop managers in order to drive the message to customers.


Literature consistently refers to brands as a means of adding value (Ailawadi and Keller 2004; de Chernatony and Dall'Olmo Riley 1998b; Mitchell et al. 2012), which this paper investigates, as St Wilfrid's brand had already been built up over 20 years through the work of the hospice. Their logo, which consists of a dove bent into the shape of a hand over the clear text "St Wilfrid's Hospice," carries significant power in the local community. In 2013, a local website, "The Best of Eastbourne," surveyed the town and found that St Wilfrid's was its favourite charity. The teal colour used in the logo is also very closely identified with the hospice's work, from fundraising events to advertisements in newspapers and flyers around town; it was especially challenging to transform this into a brand which exhibits the 'fashion forward, ethical consuming, sustainable wardrobe' values that we wanted to attribute to our shop brand. Brodie (2009) suggests that greater attention to integrating the role of the brand in value-adding processes improves customer experiences and performance (Darden and Babin 1994; de Chernatony and Dall'Olmo Riley 1998a, 1998b, 1999; Doyle 1990; Jara and Cliquet 2012). What the project team tried especially hard to do was to demonstrate to the organisation that adding value to a brand or a service, such as bringing in a bought-in goods line or opening an ecommerce shop on Amazon and eBay, is not merely a way of reacting to competitor tactics. Bridson et al. (2013) argue that 'these activities are viewed as an investment in the long-term future of the brand and are directed by a desire for the brand to communicate a differentiated total store experience' (pp. 1-20). This paper will examine how the project team went about conveying these messages, as well as the tools and preparation that went into planning a new marketing strategy for the St Wilfrid's Retail Company.

Extending the Brand

"Retailers must determine how their brand will be used to reflect the consumer's current view of themselves and the manner in which it can enhance their aspirational self-identity" (Bridson et al., 2013, pp. 1-20). Symbolic features are the intangible, emotional values of a brand that can be used to tell the story the organisation wants the consumer to understand (de Chernatony and Dall'Olmo Riley 1997b; de Chernatony and Segal-Horn 2001; Keller 2003; McEnally and de Chernatony 1999). The logo, the images associated with the brand, the text, the font, the colour: every symbol relays messages about the brand that needs to be carefully constructed and adjusted for the key customer the store would like to position itself towards. Symbolism ensures that the brand can reach out to customers and connect on a personal level, promising customers that the merchandise inside the shop will be best suited to their lifestyle choices. In Holland's (2006) 'Branding for Non-profits' he suggests that 'design and branding work are respectful and effective partners. And, in both cases, the design is accessible, graphic, straightforward, memorable - a complement to the overall brand message of the organisation' (p. 3).

For St Wilfrid's, the logo has to express two entirely different messages, and a central goal for the project was developing a way to do this without misrepresenting the hospice or the retail brand. De Chernatony (2011) recalls that Virgin provides a good example of a brand being stretched (p. 54): "From its origin in the record business it has stretched into the airline, wine retailing and financial services sectors, amongst others" (de Chernatony et al. 2011, p. 54). We wanted customers to associate our logo with a new kind of charity shop that is fashion forward and fun to shop in, with great customer service and excellent value for money. Warren Alexander, CEO of the Charity Retail Association, believes that "it is important for charity shops to demonstrate the positive impact they have beyond raising money for their parent charities… although shops should not forget their core purpose" (Pudelek, 2013). It was important, however, that we respect the fundraising elements that the logo stood for, such as 'transforming end of life care locally', which we introduced in the shops as vinyl graphics on the wall to show that St Wilfrid's Retail exists for one reason only: to raise funds for the hospice. In order to do this, we had to change our branding strategy in the shops to deliver a new incentive for a completely new and delighted customer base. As the project began to take action we also exercised caution, as we knew that stretching a brand comes at a cost. De Chernatony (2011) warns that "successful brand extensions may dilute the brand values of the core product" (p. 55), or in our case, the hospice message.


Sustainable Fashion, Sustainable Work

"The problems created by the fashion industry in the West are quickly being matched and multiplied in other parts of the world. Buying so much clothing, and treating it as if it is disposable, is putting a huge added weight on the environment and is simply unsustainable" (Cline, 2013, p. 3). When determining who the St Wilfrid's target customer base was, thorough research methodologies were built around surveys, observation and informal interviews and interactions with customers, managers, board trustees, members of the public, volunteers and directors of the charity. There was a considerable gap between who our actual customer was and where we wanted to be in order to compete with other charities on the high street. Charity shops provide 19 per cent of total UK charity incomes (Charity Retail Association, 2013). In spring 2012, 35% of survey respondents were between the ages of 61 and 79 and 28% were between the ages of 46 and 60. Of the 223 surveys returned in the first round, 57% replied that they spent between £1 and £10 and 31% reported spending £11-£20 on a visit to our shops. These results were not surprising to us, given how the shop looked, the brand positioning and the management. Adding value to St Wilfrid's brand was going to need more than a simple logo update, although retailers such as JCPenney in the US proved how difficult even that could be. In 2013, Forbes reported that sales at the retail institution fell some 25% and its stock fell some 50%. The publication argues that the first mistake was that the new CEO, Ron Johnson, 'spent lavishly trying to remake the brand' without looking at growing the business internally by changing management (Forbes, 2013). The successes that have come out of the government-funded Knowledge Transfer Partnerships have in large part to do with the fact that Associates stay and work within the business for a minimum of six months in order to drive innovation and change. Research and a strong understanding of the brand from all angles are essential to establishing what the brand can achieve. Neff and Moss (2011) believe that "in an ideal world the innovative staff and digital savants would work long enough to really understand your organisation and begin the opportunity to research, pilot, and eventually manage their own programs to drive the organisation's mission forward for years to come" (p. 131).
Sustainable and ethical consumption is now something that customers take seriously, and it shapes how contemporary consumer societies attempt to create more sustainable lifestyles (e.g., Barnett et al., 2011; Lewis and Potter, 2011). With 30% of our shoppers responding that they come into St Wilfrid's to further support the hospice, and 29% stating that they come in to find a bargain, the ethical consumer trend began to look like a realistic way of appealing to our customers and standing out from other charity shops that continued to use their charity's branding to sell only donated goods and their parent charity's message. "The fashion industry has largely split into ultra high-end and low-end clothing; consumers have been divided into warring camps of deal hunters and prestige shoppers, with little in between. And with "good" clothes now outrageously priced, shopping cheap is more of a non-choice than we recognise" (Cline, 2013, p. 6). Included in our branding had to be the message that one could find good-quality, cheap clothing, which according to Cline in her work "Overdressed: The Shockingly High Cost of Cheap Fashion" is an almost impossible achievement in the current fashion economy. Still, our donors very often give almost-never-worn brand-name pieces and donate highly valued vintage items in order to support the hospice. This consideration also helped the project make a decision about the branding, as it was imperative to the offer that the quality of donations remained suitable. One limitation questioned by the board of trustees at St Wilfrid's was how customers would react to such a radical change in our marketing. With whom would customers compare our shops if we were to begin branding ourselves more like a retail outlet than a charity outlet?

Branding Second-hand Cultures

Gregson and Crewe (2003) explored the second-hand cultures of the charity shop, looking specifically at aspects of 'shop-talk', shopping behaviours and the wider material cultures of used 'things' in relation to alternative spaces (p. 90). The culture that the project stepped into embodied the stereotypical charity shop identity that can be associated not only with our brand but with others such as Oxfam, British Heart Foundation, British Red Cross, Barnardos, Chestnut Tree, Cancer Research, Age UK, Debra, Scope, Sussex Beacon and PDSA (Charity Retail Association, 2014). 'Shop-talk' between managers and volunteers allows both positive and negative effects to emerge. On the one hand, customers can relate to volunteers and feel freer conversing with them about the charity, the community and local businesses in the area, which makes the store atmosphere pleasant and encourages shoppers to offer support. On the other hand, when managers are not themselves incentivised, this trickles down to volunteers and customers, with great potential to hurt the brand image on the high street. Neff and Moss (2011) ask whether "you reached out to the knowledgeable and digitally savvy millennial staff and challenged them to move the organisation's mission forward (using social media)?" (p. 126). Johnston et al. (2008) have conducted extensive research into ways 'alternative retail spaces and places through in-store discourses, performativities and environments' can support a 'shop for change' initiative that is corporate-enabled and 'consumer-citizen' empowered (Goodman and Bryant 2013, pp. 9-15). Neff and Moss (2011) argue that 'to really drive an innovative project, you need to inspire your innovative employees. Employing creative and dedicated employees who will donate their energy and effort above and beyond their regular 9 to 5 hours is key' (p. 150).

Charity Retail Sector and Branding

"Charity retail practice has evolved, from a basic social model of collecting surplus goods from wealthy households to help less fortunate or poorer households into the sophisticated marketing and merchandising of goods and services to the public" (Booth, 1890; Broadbridge & Parsons, 2003a; Horne & Maddrell, 2002, from Liu et al. 2013, p. 2). These notions may seem to complicate branding for charities, as the 'ethical appeals' that the Hospice may want to emerge from branding with 'added value' may give the impression that funding is going into retail rather than the actual function of the charity. According to Goodman and Bryant (2013), "Oxfam produces moralised discourses that infuse the practice of reselling items and the sale of fair trade products" (p. 13). The St Wilfrid's project team wanted to introduce branding that exhibits this with a more fashionable edge, aiming to target middle-aged women between the ages of 30 and 50, due in large part to the successful sales of our bought-in goods range and our vintage clothing. Blanckaert and Hernu (2013) believe that "buying and wearing secondhand clothes represents a more responsible lifestyle; vintage has unexpectedly become linked to the 'slow fashion' movement" (p. 292). We wanted to replicate that offer in our shop windows. The sub-brand we had yet to develop, both in shops and online, using social media to attract new, younger volunteers, would be able to sustain the brand message in the shop. This was an area we struggled with tremendously, for no matter how good the branding looked and how correct the message we set across was, the promise was not met in store, as our volunteers were still older women over the age of 60. The evolving organisational and managerial traits of the charity shop business, namely shop 'branding' (Girod, 2005), the performance of volunteering (Parsons, 2006), the changing and 'professionalisation' of the workforce (Broadbridge and Parsons, 2003, 2004; Parsons, 2004b; Parsons and Broadbridge, 2004) and the business climate (Horne, 1998; Horne and Broadbridge, 1995; Parsons, 2002, 2004a), became our main focus once we realised the flaws in our first attempt at remarketing the shop.

It was important for St Wilfrid's to consider how national charity retailers such as Oxfam have branded themselves, and the shift in culture they have led in expanding the 'relational ethic', along with world-shops as retail outlets (Goodman, 2004), working and developing within an alternative economy such as second-hand goods. Their branding is simple, recognisable and replicated internationally. One could argue that their logo, the iconic green font used effectively throughout their shops on tags, brochures and promotions, is attributed more to their retail arm than to the work the charity actually supports, because of the immense amount of retail space they occupy on the high streets of the UK. "Oxfam's identity has always seemed indistinguishable from retail since the organisation was created simultaneously with the opening of the historical store on Broad Street, Oxford in 1942. The origin of the company has been intrinsically rooted in retail activity (Fox, 1998)" (Girod, 2005, p. 518). With over 700 shops nationally (Oxfam, 2014), they are leading the way for charity shops on the high street as well as online. "The external perception of a brand cannot be positive if identity-elements such as vision and culture are not aligned with the retailer's externally perceived image" (Girod, 2005, p. 520). Oxfam's trading division went from near-bankruptcy in 1991 to becoming a successful charity shop brand because of its ability to motivate volunteers and clearly pitch its vision to volunteers, who could then confidently carry the message throughout their time in the shops.

Branding Offers

Collins-Dodd and Lindley (2003) and Keller (2003) believe that for-profit retail branding strategies focus on building retail consumers' shopping experience in opposition to competing retailers, by offering what the retailer believes is the most favourable offer to distinguish one seller from another. For a charity retailer this remains difficult, as St Wilfrid's could not predict what kind of donations would come in through the door. So we re-evaluated our stock to determine our three main offers, assessing what came in the door most often, sold best and was capable of standing alone as a central brand offering for a sustainable period of time. After attending a London College of Fashion course on Visual Merchandising and the Exploring Brand Opportunities course at Central Saint Martins, the knowledge obtained from these leading UK experts on fashion and branding was brought back to Eastbourne and developed into a sub-brand for the retail arm of the charity. 'Pre-loved, Brand New, Vintage' was introduced in October 2013, alongside other strategically placed marketing material, in two of our shops to test the offering. Pre-loved had a tagline of 'clothing and collectables,' Brand New 'handbags and accessories' and Vintage 'locally sourced', to begin adding value to our existing donated goods. We paired the text with distinctive designs for each: 'Pre-loved' has a fun and vibrant background of falling hearts in our trademark teal colour that links the offer to the main charity; 'Vintage' has a trendy flower-wallpaper background that stands out; while the 'Brand New' offer stands against a bold teal background that can be replicated easily and noticeably. Within four weeks, our bought-in goods sales went up by 82% and customers responded positively to the changes, although some managers and volunteers did not: shifting between the charity culture and the for-profit retail culture did not resonate with how they identify the selling spaces.

Horne (2000) suggested that charity retail stores in the United Kingdom can be divided into three categories according to their merchandising strategy: 100% donated merchandise, mixed merchandise (some donated and some bought-in new), and 100% bought-in new merchandise (Liu et al. 2013, p. 4; Horne & Broadbridge, 1995). Some, like Oxfam, have stores in each category (Horne, 2000). 'The audiences for typical nonprofit business activities include donors (cash and in-kind), customers (shoppers), volunteers, and those who gain value from the support offered by the organisation' (Jenkinson, Sain and Bishop, 2005). St Wilfrid's wanted to use our branding to expand this audience. In an attempt to change perceptions of charity retail as a whole, we looked to target customers who would not normally go into a charity shop. 'In a typical charity retail setting, most of the merchandise displayed on the retail shelves consists of donated goods' (Liu et al. 2013, p. 4). In order to use our branding to add value to how donated goods were perceived by the local demographic, we applied many branding strategies from high street retailers such as Primark, H&M and Marks and Spencer. Cline argues that brands such as H&M, Zara and Forever 21 are known as fast-fashion retailers, experts in constantly stocking new trends who know exactly how to hook customers into shopping more regularly (2013, p. 2). Using these retailers as examples, we set out to use similar symbols in our marketing campaigns, including using imagery of who we perceive to be our ideal customer in our advertisements. To add the local edge we hoped to deliver, while still appealing to the charity shopper, we chose to feature a St Wilfrid's Hospice employee. Kerry began working at St Wilfrid's in March 2013 and immediately brought up sales in our furniture shop through her positive attitude and instinctive knowledge of antique furniture, which stemmed from a lifetime working with her antique-dealer father in the local area. She is 40 years old and constantly charity-shops for age-appropriate, on-trend and fine quality items. She was the perfect candidate to represent the retail offer for St Wilfrid's.

Branding Elements

Aaker and Joachimsthaler (2000) believe that targeting specific customers in a niche market, avoiding an offering one cannot deliver, appropriately advertising new offerings, and using powerful names to distinguish product classes that stand out (i.e. 'pre-loved' vs. 'donated') are key to a successful brand. St Wilfrid's followed all of these as a guideline in our strategy planning, in addition to adopting what Muzellec and Lambkin (2009) call a 'brand separation strategy', which allows the brand to be flexible. This was especially important because we could not always advertise specific items in the shop. Being able to say that we always have collectable goods and that we locally source vintage goods was a huge advantage to the brand development, because it allowed us to use all of those elements in our branding tools and visual merchandise planning. Holland (2006) describes how PBS (the Public Broadcasting Service) in the US created a lasting identity that was simple but also added value to its services, which were significantly different from those of other broadcasting stations, down to 'the images, colour, format, and typography' (p. 3), setting it apart from other networks. He recalls that their brand identity was designed by C&G, who are known



for outlining not only a logo but 'various brand elements such as logo, tag line, typefaces, formats and colour palette, and ensures the integrity of the system over time' (Holland, 2006, p. 4). St Wilfrid's Hospice underwent a similar identity development in the middle of the retail project, so it remained imperative that, once it was time to design the shop fits, we would incorporate all the elements that went into the design of the hospice, tweaking some aspects to highlight the retail offer. We used the same design company for our sub-brands and worked closely with the marketing director at the hospice to ensure a connection to the mother brand while also creating a separate identity of our own.

Conclusion

When it came to competing with other charity shops on the high street, we quickly realised that the charity retail sector was one capable of reacting quickly to change. Charity shops have the autonomy to change prices radically with little planning, to order in local marketing targeting consumers on specific high streets, and to promote their charitable achievements, all of which helped them to market their shops at different times of the year. "To succeed in the long run, a brand must offer added values over and above the basic product characteristics, if for no other reason than that functional characteristics are so easy for competitors to copy" (de Chernatony et al. 2011, p. 35). We needed to install semi-permanent brand fixtures in the shop that distinguished us not only from other charity shops but from our hospice work as well, so that we could begin to target high street shoppers who would not normally be interested in buying second-hand goods. We had to market our products differently. With the new sub-brands in place, and advertised in large vinyl print on our windows, we began to expand on these offers. We used them in our visual merchandising and space planning around the shop, separating the merchandising mix by brand as well as by category. For example, we have a 'Brand New' goods section: within this section lie handbags, accessories and jewellery, men's wallets and shoes, and an assortment of clothing. Ordering in tags that replicate the sub-brands to match the category of offers helps St Wilfrid's distinguish between 'Pre-loved' and 'Vintage.' We then designed our new carrier bags with the new sub-brands to advertise outside of the shop. According to de Chernatony et al. (2011), distributor brands developed back in the 1870s because of retailers' inability to compete with each other on the price of manufacturers' brands, so in order to increase store traffic they introduced their own brands. St Wilfrid's have essentially done the same thing.



Bibliography

Cross-Store Format Comparative Analysis." Available at SSRN 2246024 (2013).

Aaker, David A., and Erich Joachimsthaler. "The brand relationship spectrum." California Management Review 42, no. 4 (2000): 8-23.

Cline, E. 2012. Overdressed.. Penguin Group (USA) Incorporated.

Aaker, David A., and Erich Joachimsthaler. "Brand leadership: The next level of the brand revolution." New York (2000).

Day, George S., and R Wensley. "Assessing advantage: a framework for diagnosing competitive superiority." The Journal of Marketing (1988): 1-20.

Ailawadi, Kusum L., and Kevin Lane Keller. "Understanding retail branding: conceptual insights and research priorities." Journal of retailing 80, no. 4 (2004): 331-342.

Darden, W.R., Babin, B.J., 1994. Exploring the concept of affective quality: expanding the concepts of retail personality. Journal of Business Research 29, 101–110.

Barney, J.B., 1991. Firm resources and sustained competitive advantage. Journal of Management 17 (1), 99–120

De Chernatony, L, and F Dall'Olmo Riley. "Defining a" brand": Beyond the literature with experts' interpretations." Journal of Marketing Management 14.5 (1998): 417-443.

Blanckaert, P., Rincheval Hernu, A., Jacobs, D., Ammon, L. and Levesque, C. 2013. Icons of vintage fashion. New York: Abrams

De Chernatony, L, and S Segal-Horn. "Building on services' characteristics to develop successful services brands." Journal of Marketing Management 17, no. 7-8 (2001): 645-669.


KTP Associates Conference 2014

63





Can corporations ensure fair wages, safe working practices and environmental restoration?

Rose Dunne
Birmingham City University, Parkside, 5 Cardigan Street, Birmingham B4 7BD

The global supply chain is a complex system, and consumer demand for Fair Trade and environmentally sustainable products is rising. This paper discusses how branding and advertising can be used to honestly convey the positive changes corporations are beginning to make for the empowered consumer. Time and money are invested in developing brand imagery to communicate trust, but can image alone achieve this? Nike invested millions in advertising, but the 'sweatshop scandal' caused a loss in sales and Nike has had to change its unethical practices to sustain its market. I argue that, for a brand to truly build a relationship of trust with its consumers, it must be transparent and honestly promote the positive social changes made by the company or corporation. Firstly, I will introduce how collaboration between NGOs, campaign groups, civil society organisations and the global corporations set up to make profits can ensure fair wages and safe working practices. Secondly, I will argue that advertising and branding can play a role in promoting the positive contributions corporations make to improve working practices and the natural environment. Finally, I will refer to my KTP project and discuss my ideas for promoting the positive changes being made within the gold industry.

The Fair Labour Association

As consumers, we have the power to bankrupt a business or to enable its expansion. Our decision to buy products is informed by the billions spent on advertising, but also by our awareness of brands' supply chains and ethical stance. In 2002 Nike joined the Fair Labour Association in response to pressure from consumers, many of whom were dissatisfied with the working practices in Nike's sweatshops.1 Tiger Woods was paid $100,000 for his appearance in a Nike commercial, whilst Nike's sweatshop employees were paid $1.25 an hour to work in unsafe and unsanitary conditions.2 The Fair Labour Association works with corporations to end 'abusive labour practices' by "offering tools and resources to companies, delivering training to factory workers and management, conducting due diligence through independent assessments, and advocating for greater accountability and transparency from companies, manufacturers, factories and others involved in global supply chains".3

In 2010 Auret van Heerden, former CEO of the Fair Labour Association, spoke at TED: 'We've been able to harness the power and the influence of the only truly transnational institution in the global supply chain, that of the multinational company, and get them to do the right thing, get them to use that power for good, to deliver the key public goods. Now of course, this doesn't come naturally to multinational companies. They weren't set up to do this. They're set up to make money. But they are extremely efficient organisations. They have resources, and if we can add the will, the commitment, they know how to deliver that product […] The problem is the lack of trust, the lack of confidence, the lack of partnership between NGOs, campaign groups, civil society



organisations and multinational companies. If we can put these two together in a safe space, get them to work together, we can deliver the public goods right now.'4 This is the framework I am working from: the idea that the corporations with the power and money to make positive social change can be persuaded to do so. The Fair Labour Association works with 4000 companies and publishes reports on the developments made by the companies it works with. There are a number of safeguards created by the Fair Labour Association (FLA), including a Third Party Complaint procedure that enables individuals, groups or organisations to formally complain if the workplace code of conduct is breached. The FLA also provides courses and training to help corporations improve their working practices.5 In addition to the positive changes being carried out by the FLA, demand for Fair Trade products is increasing, and the Fair Trade label promises fairer prices, safe and sanitary working conditions and local sustainability for workers.6 "Estimated retail sales of Fair Trade products in 2012 reached £1.57 billion, a 19% increase on sales of £1.32 billion in 2011 […] Sales of Fair Trade products in 2013 rose again, reaching an estimated £1.78 billion, up 14 per cent on the previous year."7

Changes to branding and advertising

Branding could be transparent and capture the positive social changes made by corporations, encouraging consumers to buy from ethical and sustainable brands; where corporations do not follow these practices, changes could be made.

'Pioneers made the bold claim that producing goods was only an incidental part of their operations, and thanks to recent victories in trade liberalization and labour-law reform, they were able to have their products made for them by contractors, many of them overseas. What these companies produced primarily were not things, they said, but images of their brands. Their real work lay not in manufacturing but in marketing.' Naomi Klein, No Logo8

Branding and advertising persuade us to believe that when purchasing a product, we align ourselves with the feelings and ideas associated with it. Once our survival needs have been met, further consumption is required to satisfy our egos: to define the tribe we belong to and to give us a greater sense of who we are and what we like. Effective branding communicates directly with our egos. We are creative beings who can feel emotions such as happiness without over-consumption of products; instead, we let the advertisers do the imagining for us and choose to purchase the feeling. Advertisers and brand designers could communicate with our 'moral' egos, the part of us that wants to be seen to care, and in reality we do care. I would argue that we are compassionate beings; altruism ensures our survival and happiness.9 Most consumers feel dissatisfied when they purchase a product that opposes their moral beliefs; we only have to look at Nike's loss of sales after the 'sweatshop scandal' to know this.10 Corporations that condone unethical working standards and exploit our planet's natural resources seek one thing, profit, and I argue that being moral pays. The internet plays a role in educating consumers about product supply chains and brands' unethical practices.
Social media enables consumers to voice their opinions, and brands have limited control over what consumers say, which is why corporations have no choice but to be honest. If companies were to invest some of their advertising budget in replanting trees, a photograph of a forest with the brand's logo could be shown: people would continue to recognise the brand and feel good about purchasing from it. The company's profits



could be sustained or increased, whilst making a positive change. If the company were to improve the working environments of its workers, advertisers could document the process. The role of branding and advertising could move away from manipulation and become a vehicle for capturing honest, positive social changes made by corporations. For example, free art workshops and further education organised by companies could be depicted alongside their products. Advertising could be used to remind consumers of actions taken by brands to better our world. People will feel happier to consume, and brands can maintain trust by giving back, rather than being preoccupied with maintaining illusions.

Fair trade within the jewellery industry

I work with a jewellery company as a brand designer. I have researched the gold supply chain and am currently promoting Fair Trade gold products.

'In the late 1990s and early 2000s, brutal conflicts in Sierra Leone and Liberia, fuelled by trade in blood diamonds, cast a spotlight on the jewellery industry.'11 Fair Trade gold became a prevalent issue in the early 2000s; the 2006 film 'Blood Diamond' increased consumer awareness of the painful realities of the diamond industry.12 NGO campaign groups and consumers have pressured the gold industry to become transparent, so that gold can be purchased with the knowledge that miners, their communities and the natural environment have not been exploited. Many of us will be aware of the dangerous mining practices, child labour and land devastation caused within the gold mining and diamond industries. NGO campaigns exposed this reality and demanded 'conflict-free jewellery'. Fair Trade gold ensures education for miners' children, restoration of the local environment, fair wages and safer working conditions. 'The Fair Trade Foundation research shows that consumers believe buying jewellery for a special occasion would hold greater value and significance if this carried the Fair Trade and Fair Mined dual label.'

Global supply chains are complex systems; it is difficult to monitor where all materials are sourced and how they are made. For instance, in the jewellery sector it is not just gold that is sourced unethically, but also silver, solder, palladium, platinum and rhodium (a solution used to plate gold rings). 'An NGO "Fair Trade Gold" movement has surfaced, its crystallization fuelled by a burgeoning body of evidence that points to impoverished artisanal miners in developing countries receiving low payments for their gold, as well as working in hazardous and unsanitary conditions […] Artisanal gold mining activities are far more widespread and illegal. Any attempt to liaise with such miners, therefore, is bound to encounter resistance from host governments.'13 The Fair Trade gold movement also formed in response to the success of fairer wages for workers on banana, tea, cocoa and coffee plantations. Ensuring fair wages and safe working standards in the gold mining industry is complex because many gold mining practices are illegal. Artisanal mining must be legalized by host governments so that gold mining cooperatives can become 'Fair Trade' and pay fair wages.14 However, it is not always in the interest of host governments to legalize artisanal mining because many governments profit from its current 'set up'. Host governments are 'more concerned with capturing this gold than with improving the working conditions of artisanal miners'.15 Artisanal miners do not sell their gold to end retailers, but to buyers who visit mining sites or are located nearby, who in turn sell to traders before exportation. The gold is turned into bullion, consisting of 99.5% pure gold, which is sold to manufacturers.



'The traditional structure of the supply chain means that gold from several sources may be refined in the same batch, and it has been historically impossible for consumers to know where the gold in an item of jewellery was mined.'

The company I work with is currently establishing itself within the Fair Trade jewellery market. It is FLO certified, and I am working to brand and advertise its Fair Trade products. The company's retailers must be registered under the Fair Trade label. Fair Trade jewellery is an emerging market; becoming entirely Fair Trade will be a gradual process. As consumers, we are often detached from a product's journey and are unable to see the processes it undergoes. I have photographed the jewellery-making process to capture the journey of the ring and the people who make it. Supply chains could be shown through photography and film as a form of advertising, to communicate how a product is sourced or made. Jewellery can be photographed on people engaging in positive change, rather than staring blankly at a camera. In addition to promoting Fair Trade products, I have devised a brand strategy called the 'back to back' project. During the industrial revolution, the jewellery industry expanded in Birmingham, and forests were stripped away to build back-to-back terraced housing for factory workers. My idea: every time a ring is sold, a tree is planted. I contacted the Woodland Trust; it costs £10 to plant a tree in one of their forests. That is £10 added onto the ring, or £10 less spent on printing paper cards and booklets selling the ring. Alternatively, the tree could be planted in an area where deforestation occurs due to gold mining. It is essential that we restore the natural environments of mining communities. One 18ct gold ring creates 20 tonnes of earth waste.16 There is a parallel contradiction: whilst this knowledge may prevent consumers from buying wedding rings, gold is the miners' livelihood. Without gold they are without work, and the gold industry employs over 100 million people annually. The increase in gold prices has seen an influx of people into the gold mining industry.17

To shed light on this contradiction: as consumers, refusing to buy gold does not help miners either. 15 million people across Africa, Asia and Latin America work in artisanal mining, because working in agriculture or other industries is not an option. The 'lack of transparency' in the gold supply chain means that it is difficult for consumers to have



knowledge of the conditions in which gold was mined. The gold industry is under pressure from consumers to evidence its mining practices, and the Fair Trade label ensures that strict regulations are followed. Many online publications and social media pages support ethical mining practices. The Fairtrade Labelling Organisation (FLO) and the Alliance for Responsible Mining work within Latin America, and are expanding their operations into Africa and Asia. Gold miners working with the Fair Trade organisations receive '10% of the internationally agreed price of gold'.19

'$137 billion was spent on gold jewellery in 2010, making it one of the world's largest categories of consumer goods.'20 Companies who do not pay fair wages or care for our planet are at risk of losing their reputation as a trusted brand, and a brand's reputation is everything. Once Fair Trade becomes mainstream, companies will be under pressure to make changes. The problems are complex, but to refer back to Auret van Heerden's point: NGOs, corporations and human rights activists can and do work together to effect positive social change. To conclude with Naomi Klein's assertion: 'Since many of today's best-known manufacturers no longer produce products and advertise them, but rather buy products and "brand" them, these companies are forever on the prowl for creative new ways to build and strengthen their brand images.'21 What better way is there to 'strengthen a brand image' than by improving the lives of others and our natural world?

Thank you to David Osbaldestin (BCU, Visual Communication Lecturer) and Sam Stevens (Hockley Mint, Marketing Supervisor) for their support.



1 http://www.fairlabor.org/
2 https://www.youtube.com/watch?v=M5uYCWVfuPQ
3 http://www.fairlabor.org/
4 http://www.ted.com/talks/auret_van_heerden_making_global_labor_fair
5 Ibid.
6 http://www.fairtrade.org.uk/what_is_fairtrade/default.aspx
7 http://www.fairtrade.org.uk/press_office/press_releases_and_statements/february_2013/fairtrade_bucks_economic_trend.aspx
8 Naomi Klein, No Logo, p. 4
9 'Altruism', In Our Time
10 http://www.theguardian.com/world/2001/may/20/burhanwazir.theobserver
11 http://www.fairtrade.org.uk/press_office/press_releases_and_statements/february_2013/fairtrade_bucks_economic_trend.aspx
12 http://www.fairtrade.org.uk/gold/
13 'Fair trade gold': Antecedents, prospects and challenges, p. 386
14 'Fair trade gold': Antecedents, prospects and challenges
15 'Fair trade gold': Antecedents, prospects and challenges, p. 392
16 http://www.fairtrade.org.uk/includes/documents/cm_docs/2011/F/Full%20Policy%20of%20Policy%20Report.pd
17 http://nodirtygold.earthworksaction.org/library/detail/how_the_20_tons_of_mine_waste_per_gold_ring_figure_was_calculated#.U3DKtK1dVCM
18 http://www.fairtrade.org.uk/includes/documents/cm_docs/2011/F/Full%20Policy%20of%20Policy%20Report.pdf
19 http://www.fairtrade.org.uk/includes/documents/cm_docs/2011/F/Full%20Policy%20of%20Policy%20Report.pdf
20 http://www.fairtrade.org.uk/includes/documents/cm_docs/2011/F/Full%20Policy%20of%20Policy%20Report.pdf
21 Naomi Klein, No Logo, p. 4



The Role of Research and Development in Developing New Products within a Medium Sized Analytical Manufacturer

Jennifer Hulse1, Melvin R Euerby1, Alan Bassindale2 and Peter Taylor2

1 Hichrom Ltd, Unit 1, Markham Centre, Station Road, Theale, RG7 4PE
2 Faculty of Science, Department of Life, Health and Chemical Sciences, The Open University, Walton Hall, Milton Keynes, MK7 6AA

Abstract

Hichrom is a medium-sized company which specialises in the distribution and manufacture of silica columns for High Performance Liquid Chromatography (HPLC). A column is a hollow metal tube filled with silica particles, which can be given varying properties beneficial to the end user by altering the surface of the silica with structures called silanes. These silica-based columns are used to separate, identify and quantify analytes within a mixture, exploiting the analytes' differing interactions with the silane. They are used widely in the medical, pharmaceutical, research and manufacturing industries. The aim of the project is to design and produce six novel silanes with differing properties, then introduce these products to the market. As an expanding organisation, Hichrom is using the KTP format both to enhance its understanding of organosilica chemistry through the Open University and to improve its ability to take a concept all the way through to completion. This paper discusses the project from an R&D perspective, where all stages from feasibility to commercialisation must be monitored carefully to ensure the smooth running of the project, and describes the tools implemented by the project manager to deliver products not only within budget and on time but also to a high level of quality.

Introduction

It is known that there is a huge variety of stationary phases available for reversed-phase High Performance and Ultra High Performance Liquid Chromatography (HPLC and UHPLC). When deciding which novel or improved silanes will be targeted in the KTP project, a number of considerations need to be taken into account:

• What phases do customers want and what applications do they use them for?
• What phases are already in the portfolio and what can be developed?
• What will sell?
• Who are the competitors?

These questions do not have a simple, straightforward answer and in some cases it is not possible to get an unbiased response due to intellectual property rights (IPR) and collaborations between specific instrument and column manufacturers. However, there are a number of tools that can be utilised to gain a deeper understanding of how to answer these questions, which will be discussed later in the paper.

In addition to understanding the chemical considerations for the type of ligand to be developed, there are also a number of questions concerning practical and financial implications that need to be evaluated when deciding which novel silanes to target:

• What is a quality product?
• What phases are cost-effective to produce?

Market Analysis

The goal of the R&D team is to develop financially viable chromatographic stationary phases (i.e. for reversed-phase, normal-phase, supercritical fluid and hydrophilic interaction chromatography, in addition to solution-specific phases such as a sugar phase) that would be uniquely placed in the current market (i.e. possessing enhanced selectivity, inertness, stability and chromatographic performance), giving the product a competitive edge over existing offerings.

In order to achieve this, the R&D's focus and direction is influenced by data from numerous sources. These include:

• Voice of industry leaders/visionaries/customer questionnaires.
• Intelligence on preferred column lists and method development strategies as used in leading companies.
• Mapping our current portfolio of products to feedback from technical and sales enquiries, as well as Hichrom conferences and training days.
• Attendance at national and international conferences to identify "hot topics" and unmet needs.
• Evaluating the sales of our distributed products to identify purchasing trends.
• Extensive in-house experience of the needs and requirements within the pharmaceutical arena.
• Collaborative ventures with academic and instrument manufacturers (i.e. prototype testing of equipment, which gives us an early heads-up of what new columns the new instrument will require).
• A database of competitor product trends and sales.
• Extensive in-house expertise in evaluating chromatographic stationary phases (i.e. assessing the advantages and disadvantages of competitor phases) using an array of validated protocols.
• Extensive in-house chromatographic knowledge of the retention principles associated with "lead" products.

The data produced is mined and evaluated using chemometric tools such as Principal Component Analysis (PCA) (Figure 1) and management tools such as SWOT analysis and the Analytical Hierarchy Process (Figure 3) to assist management in their decisions.

[Figure 1: Principal Component Analysis of the Hichrom portfolio before the KTP. A PCA-X score plot (t[1] vs t[2]; R2X[1] = 0.54, R2X[2] = 0.26; Hotelling T2 (0.95) ellipse) positioning phases such as ACE C18, C8, C4, CN, Phenyl, C18-AR, C18-HL, C18-PFP, PFP, PEG, polymer and two EPS phases (annotated by bonding density and silanol character) along axes of hydrophobicity, hydrophilicity/polarity, shape, anionic character and dipole character.]

PCA is a good tool to identify gaps within Hichrom's column portfolio. The plot suggests various beneficial phases which could be developed through the KTP to present a broader portfolio. There are limitations to this PCA, such as not accounting for ion-exchange interactions or the lack of any indication of temperature or pH stability; however, in conjunction with other sources of research, the PCA provides a neat overview of which columns are required to improve Hichrom's portfolio.

As with all projects, it must be considered which novel silanes will create the biggest return on investment against the resources required to develop and manufacture the column. From Figure 2, the first choice of phase would be the "low hanging fruit": those that are cheap or simple to manufacture and attract the greatest interest from customers. The complex silanes with very limited interest should be avoided due to the limited return on investment.

[Figure 2: Prioritisation scheme for novel silanes identified during the market analysis, ranking candidate phases by manufacturing cost and complexity against customer interest.]
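The score-plot PCA behind Figure 1 can be sketched in a few lines. This is a minimal illustration of PCA via singular value decomposition; the phase names are ACE chemistries mentioned above, but the three test scores per phase are invented for the example and are not Hichrom's actual characterisation data.

```python
import numpy as np

# Hypothetical chromatographic test scores for a few stationary phases
# (columns: hydrophobicity, hydrogen bonding, shape selectivity).
# Illustrative numbers only, not measured Tanaka data.
phases = ["C18", "C8", "CN", "Phenyl", "PFP"]
X = np.array([
    [0.95, 0.20, 0.40],
    [0.70, 0.25, 0.35],
    [0.20, 0.70, 0.30],
    [0.45, 0.40, 0.80],
    [0.40, 0.55, 0.75],
])

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                      # t[1], t[2], ... for each phase
explained = s**2 / np.sum(s**2)     # R2X-style fraction of variance per component

for name, (t1, t2) in zip(phases, scores[:, :2]):
    print(f"{name:7s} t[1]={t1:+.2f}  t[2]={t2:+.2f}")
print("variance explained:", np.round(explained[:2], 3))
```

Phases that sit far apart in the t[1]/t[2] plane offer complementary selectivity, while sparsely populated regions of the score plot correspond to gaps in the portfolio.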


Feasibility, Research and Development

R&D aim to work within the Analytical Hierarchy Process model (Figure 3). In order to maintain this ideal working model, one needs to ensure that there is a balance between research, development and commercialisation; otherwise the model falls down and one cannot fuel the launch of new products. If this model is adhered to, there will be a steady stream of products onto the marketplace. However, in order to achieve this model, sufficient resources such as instruments and Full Time Equivalents (FTEs) must be ring-fenced for all the process stages. With good project management, this can be achieved.

[Figure 3: Ideal overall R&D strategy: a pyramid of projects per stage, with 6–9 projects in research/feasibility, 3 projects in development and 1 project in commercialisation.]

Once the required product characteristics have been identified, then, in collaboration with our expert organosilicon chemist (using the Open University facilities), a range of candidate silanes is unambiguously synthesised and bonded onto silica before being subjected to extensive testing within the well-equipped and resourced R&D laboratories at Hichrom.

The quality of the candidate products is assessed by extensive testing (Figure 4) and benchmarking against competitor columns: their chromatographic characteristics (using chemometrics), selectivity, inertness, stability (acid, base, temperature and high pressure), and phase bleed and collapse in aqueous conditions are assessed, and linear solvation energy relationships are used to dissect out their retention mechanisms. Chromatographic performance is assessed by traditional van Deemter plots as well as the newly described kinetic plots.

Figure 4 summarises the characterisation and quality-testing flow:

• TANAKA: full Tanaka characterisation to determine hydrophobicity, H-bonding, steric selectivity and any ion-exchange interactions on the stationary phase; nitro, methoxy and di-nitro probes indicate π–π and dipole–dipole interactions.
• PFP QC: a selection of charged and neutral, acidic and basic molecules with varying degrees of aromatic character; indicates any residual charge on the stationary phase and distinguishes π–π interactions and shape selectivity; run at the start and end of quality testing to observe any changes during the characterisation suite.
• ACIDS: isocratic analysis to distinguish between the pKa and steric arrangement of various acids; identifies differences in phenolic, hydrophobic and hydrophilic characterisation; indicates likely shape selectivity and ion-exchange character of stationary phases.
• BASES: gradient and isocratic analysis to distinguish between charged bases and hydrophilic compounds that will readily exhibit tailing; identifies any ionic interactions and residual charge on the stationary phase; due to the high-temperature analysis, also a good indicator of column bleed.
• PHASE COLLAPSE: indicates the stability of a phase in 100% aqueous mobile phase by switching between aqueous and organic phases; an added drop in pressure and subsequent analysis will indicate any phase collapse when there is no pressure on the column.
• 100 ANALYTES: a short gradient for 100+ compounds, compared against current stationary phases, to identify real-life application mixes for a possible marketing strategy; chemometric analysis can correlate retention mechanisms and calculate selectivity indices for each analyte on different stationary phases.
• FORCED STABILITY: accelerated stability studies carried out under harsh conditions (pH 1.4 and 80 °C) to approximate column lifetime relative to other phases; further studies at pH 2.5 and pH 7.0, carried out at 60 °C to mimic more realistic column usage, show that the stationary phase is fit for purpose.
• SILANE SPECIFIC: depending on the nature of the stationary phase, new characterisation methods will be developed to probe specific interactions, e.g. halogenated compounds for separations on pentafluorophenyl phases, or high-pH stability on polymeric alkyl phases.

Figure 4 Process flow for column characterisation and quality testing.

The most promising candidate phase is scaled up and tested for batch-to-batch reproducibility (with respect to bonding and silane batches). Once selected for development, large-scale silane synthesis will be contracted out to a CRO (via a formal method transfer), and the quality of the scale-up is assessed by high-resolution multinuclear NMR and GC/MS at the Open University.
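The van Deemter assessment mentioned above fits plate height H against mobile-phase linear velocity u using H(u) = A + B/u + C·u, whose minimum marks the optimum operating velocity. A minimal sketch, with illustrative coefficients rather than values measured in this project:

```python
import math

# Van Deemter model of plate height H versus linear velocity u:
#   H(u) = A + B/u + C*u
# A: eddy diffusion, B: longitudinal diffusion, C: mass-transfer resistance.
# The coefficients below are illustrative only.
A, B, C = 1.5, 8.0, 0.05

def plate_height(u):
    """Plate height H (arbitrary length units) at linear velocity u."""
    return A + B / u + C * u

# Setting dH/du = -B/u**2 + C = 0 gives the optimum velocity u_opt = sqrt(B/C),
# where H reaches its minimum H_min = A + 2*sqrt(B*C).
u_opt = math.sqrt(B / C)
h_min = plate_height(u_opt)
print(f"u_opt = {u_opt:.2f}, H_min = {h_min:.2f}")
```

Smaller or superficially porous particles flatten the C·u branch of this curve, which is why the 2 µm and Ultracore phases discussed below retain efficiency at high velocities.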

the chemistries as shown in Figure 5) specifically designed for UHPLC use at high pressures. During the last two years we have seen a surge of interest in the use of superficially porous silica particles, as a consequence Hichrom have developed two phases (i.e. ACE Ultracore range - Super C18 and Super phenyl hexyl chemistries) which provide low and high pH stability coupled with the high efficiencies associated with these superficially porous phases at high pressures.

The “linchpin” of the R&D strategy lies in developing suites of stationary phases for differing chromatographic modes (i.e. RP, HILIC or SFC). Each phase within each suite possesses complimentary chromatographic selectivity to one another (Figures 5 – 7). Most chromatographic methods are developed using column screening strategies as described below hence if we can provide the chromatographers with a specifically designed method development kit, it will increase the probability of them developing a validated chromatographic method on one of these phases. Once the column is described in a regulatory method then multiple column orders can be guaranteed.

As a spin-off from our research, we have been able to offer two other products for external licensing opportunities, as they did not fit our desired selectivity profile. In addition, one of our HILIC phases demonstrated excellent properties for the analysis of sugars; this phase is therefore also being developed and labelled as a sugar-specific column. Over the last two years, all the major chromatographic conferences have placed a strong focus on HILIC chromatography for the analysis of polar molecules (i.e. those not suited to RPLC) and on SFC for analytes which straddle the RP and HILIC modes. Hence, Hichrom has not been slow in developing a suite of complementary phases for both HILIC and SFC use (see Figures 6 and 7).

Figure 5 ACE reverse-phase suite of complementary phases: C18, C18-EPS, C18-AR, Super C18, C18-PFP, CN-ES and C18-Amide (C18 is an existing phase; C18-EPS is in development)


If the desired selectivity cannot be achieved at low or intermediate pH using the existing suite of phases, the Super C18 phase we have developed can be used to exploit selectivity differences at high pH.

Figure 6 ACE HILIC suite of complementary phases: Silica, Diol and Amino (all phases in development)

During the last eight years there has been an increasing emphasis on developing small particle size silica products to take advantage of the new UHPLC instrumentation (i.e. increased speed of analysis without loss of data quality, or increased data quality with no loss in speed). Hence, Hichrom has developed a 2 µm range of phases (known as ACE Excel in



Figure 7 ACE SFC suite of complementary phases: Silica, Cyano and 2-Ethylpyridine (all phases in development)



Future research is focussing on optimising polymeric bonding to develop highly reproducible and stable phases for use in all modes of chromatography. The development of stable and reproducible mixed mode phases (i.e. C18/SAX and C18/SCX) and phases for biomolecules (i.e. size exclusion and RP columns) is also planned.

As seen in Table 1, numerous phases have been put onto the market within the KTP time frame, and eight potential phases are close to commercialisation. As previously described, the objective of the KTP project was to release six phases and, as can be seen from Table 1, this has been achieved and more: eight phases have been released, two are available as a private label, and eight further phases are expected to be launched as a result of the work carried out during the KTP.

Product                        Use/Comment                                        Status
C18-AR                         RPLC                                               On the market
C18-PFP                        RPLC                                               On the market
C18-Amide                      RPLC                                               On the market
CN-ES                          RPLC                                               On the market
C18-EPS                        RPLC                                               4Q 2014
Excel range (incl. 2 µm)       All of the ACE phases available, incl. the above   On the market
Super C18                      RPLC                                               On the market
Ultracore Super C18            Superficially porous RPLC (high and low pH)        On the market
Ultracore Super Phenyl Hexyl   Superficially porous RPLC (high and low pH)        On the market
HILIC Silica                   HILIC                                              3Q 2014
HILIC Diol                     HILIC                                              3Q 2014
HILIC Amino                    HILIC                                              3Q 2014
SFC Silica                     SFC                                                4Q 2014
SFC CN                         SFC                                                4Q 2014
SFC 2-Ethylpyridine            SFC                                                4Q 2014
Sugar specific phase           Sugar analysis                                     3Q 2014
Stable bond CN                 RPLC or SFC or NPLC                                Available for private label
Phenyl Hexyl                   RPLC                                               Available for private label

Table 1 Summary of R&D developed and projected products.

Quality

Hichrom prides itself on the quality of the products it sells, so it is important to maintain such standards. This is acknowledged by its ISO 9000 and ISO 9002 certification.

Due to the highly changeable nature of R&D, the Agile project management style [3] was adopted to ensure that the project was completed on time and to budget without compromising on quality.

It must be appreciated that not all concepts will progress to commercialisation, so an agile, fluid work environment is required. Maintaining quality is a key objective within Hichrom, bringing higher expectations that its products will be the best on the market.

With good project management, using tools such as the Agile or PRINCE2 methodologies, multiple projects can be run concurrently, enabling a constant release of products so that objectives can be met or, in this instance, exceeded.

Conclusion

As discussed within this paper, there are many facets within R&D, all of which must be controlled and given the correct amount of resources in order to accomplish goals.



References

1. H. Abdi and L.J. Williams, Principal component analysis. WIREs Comp Stat, 2010, 2: 433-459.
2. G. Houben, K. Lenie and K. Vanhoof, Decision Support Systems, 1999, 26: 125-135.
3. DSDM Consortium, Agile Project Management Handbook, v1.1, Kent, 2012.



Oil concentration range extension of an oil in water instrument based on nephelometry

B. Oger(1,2), C. Durkin(1), C. Redstone(1), M. Coomber(1), C. Crua(2), G.J. Awcock(2)

(1) Rivertrace Engineering Ltd, Unit P, Kingsfield Business Centre, Philanthropic Road, Redhill, RH1 4DP
(2) University of Brighton, Cockcroft Building, Lewes Road, Brighton, BN2 4GJ

Abstract

For environmental and economic reasons, knowing the oil concentration in water has become fundamental for the oil and merchant marine sectors. The oil industry produces and uses large quantities of water, from oil fields and tankers to refineries. Across these processes, the oil concentration in water varies from thousands of parts per million (ppm) to a few parts per billion (ppb), and accurate instrumentation plays a crucial role in monitoring each step. Rivertrace is a worldwide recognised company providing oil in water measurement equipment. Its main instruments are based on an optical technique, nephelometry: from light absorption and scattering, oil and solid concentrations can be determined. Oil measurement is a challenging task: at high concentration (a few thousand ppm) the solution is optically too dense, while at low concentration the absorbed light and scattering signals are weak. A second challenge is oil and solid particle differentiation, as solid particles also perturb signal intensities. The third challenge is the variation of signal absorption and scattering with oil type and oil droplet size distribution. To overcome these issues, different set-ups are used according to the oil concentration range. In particular, the measurement volume decreases as the oil concentration range increases, in order to avoid high absorption and to obtain accurate measurements. Oil droplets and solid particles are discriminated by recording scattering signals at different angles, as they have different scattering patterns. The size distribution is controlled by a conditioning pump which homogenises the oil droplets to diameters of a few microns. The Smart 50M is a newly commercialised Rivertrace product with ranges covering 0 to 10 ppm, 0 to 200 ppm and 0 to 2,000 ppm. The highest range needed to be extended in order to respond to market demands. The aim of this work was to validate the 5,000 ppm limit, or to achieve it with minimal modification of the original equipment. The effects of the different parameters (measurement volume, optical fibre diameter, light source to cell distance, cell to sensor distance and sensor angle) were studied in order to validate and increase the range without compromising the accuracy of the equipment. The problem of “roll-over” of the scattering signal was overcome: the scattering signal increases with oil concentration, but at higher concentrations (above 1,500 ppm) the signal intensity decreases. The optimal configuration was determined empirically using red Diesel. The 5,000 ppm range was validated, and the possibility of extending to over 9,000 ppm through slight modifications of the standard set-up, without computational algorithm modification, was characterised. The extension of the oil concentration range offers more market applications for the new product.

Background

Rivertrace Engineering Ltd was formed in 1983 as a highly specialised company in the field of environmental pollution control. The company specialises in Oil in Water (OIW) monitors for both industrial and marine applications. Rivertrace Engineering Ltd is an ISO 9001 quality assured company and a market leader with over 30 years’ experience of OIW monitoring. OIW monitors are necessary and compulsory in the marine sector for bilge and ballast water discharges from ships; measurement instruments must comply with the International Maritime Organization (IMO) standards (resolutions MEPC 107(49) and MEPC 108(49)) [1, 2]. Industrial sectors require OIW measurements for boiler water, condensate cooling water, produced water and discharged water. For each application, concentration ranges and oil types are different, and these specificities require tailor-made systems. Oil concentrations can vary from a few parts per billion (ppb) to a few thousand parts per million (ppm). In order to fill a gap in the OIW market, high oil concentration measurements are needed for produced water. The Smart 50M (Fig 1), an in-house developed product, was designed to cover different concentration ranges [3] as follows:
low range: 0 to 10 ppm
medium range: 0 to 200 ppm
high range: 0 to 2,000 ppm
The 2,000 ppm oil concentration range was undergoing validation by the R&D department, but pressure was coming from the sales division to further increase the maximum range up to 5,000 ppm in order to target more OIW monitoring applications. This was the background to the main objective. A secondary goal of this one-month mini-project was to familiarise the KTP Associate with OIW instruments as well as the optical and algorithm techniques involved. This project was an introduction to oil in water measuring challenges as part of a larger two-year KTP project.

Figure 1 – Smart 50M - Commercialised product

Technique The Smart 50M instrument, in common with most of the equipment developed by Rivertrace, is based on infra-red (IR) nephelometry. Nephelometry is an optical technique based on recording both the absorption and scattering of the light by oil droplets and suspended solid particulates (Fig 2) in a fluid, in order to measure their respective concentrations. A nephelometry system employs a light beam and several light detectors set at different angles, commonly 0˚ for absorption and 90˚ for scattering [4, 5].

Figure 2 – Light rays scattered by a sphere

Particle density is then a function of the light scattered onto the detectors by the particles. However, the signal intensity depends on the shape, colour and refractive index of the particulates (Fig 3).

Figure 3 – Scattering diagram for different particle sizes dp [6]

As water contaminants include gas bubbles, solid particles and oil droplets, the technique has to be sensitive enough to differentiate them. Exclusively monitoring the OIW concentration is a critical point, as the oil measurement has to be independent of solid particle and gas content. Gas bubble perturbations are reduced by increasing the flow pressure; their effects are therefore neglected in this study. Infra-red light is highly absorbed by hydrocarbon molecules (oil and petroleum are primarily composed of hydrocarbons). The selected wavelength is in the infra-red (λ = 850 nm), in order to maximise the light absorption and therefore the sensitivity to oil droplets. As the wavelength of the light is smaller than the particulate circumference, the scattering can fall into different scattering regimes. The size criterion, χ, is expressed as a function of the particulate radius r and the wavelength of the light, λ, as:

χ = 2πr / λ     (Equation 1)

If χ < 0.1: Rayleigh scattering regime
If 0.1 < χ < 50: Mie scattering regime [7]
If χ > 50: Geometric scattering regime



The sample was pre-processed in order to obtain a homogeneous and steady size distribution of oil droplets. From microscopic images of the oil droplets (Fig 4), the size distribution was observed to be narrow and monodisperse. Measured diameters were between 3 µm and 10 µm depending on the oil type and conditioning system. Therefore, the oil droplets were considered to fall in the Mie scattering regime (10 < χ < 35).

Figure 4 – Typical oil droplet size and shape (red Diesel)

The main advantage of falling in the Mie regime is that the scattering intensity is more directionally (radially) dependent than in the Rayleigh regime (Fig 5). Larger particles generally show less uniform patterns and higher angular intensity variations (χ = 10 and χ = 100 compared to χ = 1).
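The size parameter of Equation 1 (χ = 2πr/λ, the standard Mie size parameter) can be checked numerically for the droplet diameters and wavelength quoted above; a minimal sketch, with the regime thresholds as listed:

```python
import math

def size_parameter(radius_m: float, wavelength_m: float) -> float:
    """Size parameter chi = 2*pi*r / lambda (Equation 1)."""
    return 2 * math.pi * radius_m / wavelength_m

def scattering_regime(chi: float) -> str:
    """Classify the scattering regime from the size parameter."""
    if chi < 0.1:
        return "Rayleigh"
    if chi < 50:
        return "Mie"
    return "Geometric"

WAVELENGTH = 850e-9  # IR source, 850 nm
# Droplet diameters of 3-10 um were observed after conditioning (Fig 4).
for diameter_um in (3, 10):
    chi = size_parameter(diameter_um * 1e-6 / 2, WAVELENGTH)
    print(f"d = {diameter_um} um: chi = {chi:.1f} ({scattering_regime(chi)})")
```

Both ends of the measured diameter range give χ of roughly 11 to 37, squarely in the Mie regime, consistent with the 10 < χ < 35 estimate above.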

Figure 6 – Turbidity in Nephelometric Turbidity Unit [NTU] [9]

Problem

When the concentration increases, the transmission decreases due to multiple scattering, and the signals recorded at angles other than 0º increase with concentration: this is the ideal regime. However, when the concentration reaches a threshold, the signal at low deviation angles (10º – 20º) also starts to decrease (due to high absorption and multiple scattering). Further increases in concentration imply a decrease of the signal at higher deviation angles (30º – 40º). This phenomenon, called “roll-over”, is a challenge for high oil concentration measurement and solid particulate differentiation. Signals at wider deviation angles are less affected by oil concentration and can be used for solid discrimination; however, a compromise has to be made between a wider angle and the signal to noise ratio, as the wider the angle, the lower the signal intensity.
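The roll-over can be illustrated with a toy model in which the scattered signal grows with concentration but is attenuated exponentially by absorption along the optical path. All constants below are purely illustrative, not Rivertrace calibration data:

```python
import math

def toy_signals(conc_ppm: float, k: float = 1 / 1500):
    """Illustrative model only: scattering grows with concentration but is
    attenuated exponentially by absorption along the optical path."""
    t = math.exp(-k * conc_ppm)             # transmitted signal T
    s = conc_ppm * math.exp(-k * conc_ppm)  # scattered signal S
    return s, t

concs = list(range(0, 5001, 100))
s_vals = [toy_signals(c)[0] for c in concs]
peak = max(range(len(concs)), key=lambda i: s_vals[i])
print(f"S peaks near {concs[peak]} ppm")    # roll-over sets in past the peak

# Dividing by T removes the attenuation factor, so S/T stays monotonic:
ratios = [s / t for s, t in (toy_signals(c) for c in concs)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))
```

In this toy model S/T recovers the concentration exactly, which mirrors the later observation that the S/T curves show no roll-over at high ppm.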

Algorithm

Figure 5 – Scattering diagrams for different particle sizes [8]

Oil and solid particle concentrations are extracted from signals (T, S1, S2) acquired at three different angles (respectively and typically: 0˚, 45˚, 90˚) and processed by a complex algorithm. Prior calibration (three different oil and solid concentrations) is necessary in order to obtain quantitative data (Fig 7).

Transmissivity and Absorptivity

The transmissivity is defined by:

T = I / I0 = exp(−σlN)     (Equation 2)

where I0 and I are the intensities of the incident and transmitted light respectively, σ is the cross-section of light absorption by a single particle, l is the length the light travels through the solution, and N is the density (number per unit volume) of absorbing particles. The transmitted signal decreases exponentially with particulate concentration, particle size and path length; an increase in any of these factors strongly increases the absorption of the solution and hence its turbidity. At high oil concentrations (above 1,500 ppm), solutions become optically opaque (Fig 6).
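A minimal numerical sketch of Equation 2; the cross-section and droplet number densities below are illustrative values, not measured ones, while the 6 mm path length is the cell's inner diameter:

```python
import math

def transmissivity(sigma_m2: float, path_m: float, n_per_m3: float) -> float:
    """T = I/I0 = exp(-sigma * l * N), the Beer-Lambert form of Equation 2."""
    return math.exp(-sigma_m2 * path_m * n_per_m3)

SIGMA = 5e-11  # absorption cross-section of one droplet, m^2 (illustrative)
PATH = 6e-3    # 6 mm inner cell diameter

for n in (1e12, 1e13, 1e14):  # droplet number densities per m^3
    print(f"N = {n:.0e} m^-3 -> T = {transmissivity(SIGMA, PATH, n):.3g}")
```

The exponential collapse of T at high number density is the optical opacity described above.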

Figure 7 – Typical calibration points

S2 is used for solid particulate discrimination, as this signal is independent of oil concentration (at low level) and linearly proportional to solid particle concentration (Fig 8). S1/T is used for oil concentration determination, as S1/T is linearly proportional to the oil and solid particle concentrations.

79


Figure 8 – Typical calibration curves

A set of four calibration curves, S1/T and S2 (oil only and solid only), is used by the algorithm. The S1/T and S2 signals are compared to the calibration curves, oil and solid concentrations are estimated, and a new set of minimal and maximal curves is computed. By recursive regression, the interval is reduced until the difference is null; the oil and solid particulate concentrations are then the converged values.
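The recursive estimation can be sketched as follows, assuming (for illustration only) linear calibration curves; the coefficients are hypothetical, not Rivertrace calibration data, and the actual algorithm works on measured curves rather than closed-form slopes:

```python
# Hypothetical linear calibration slopes (signal per ppm) standing in for
# the oil-only and solid-only calibration curves; values are illustrative.
A_OIL, A_SOLID = 0.0005, 0.0200   # contributions to S2
B_OIL, B_SOLID = 0.0100, 0.0030   # contributions to S1/T

def estimate(s2: float, s1_over_t: float, tol: float = 1e-6, max_iter: int = 100):
    """Alternately correct each signal for the other contaminant until the
    two estimates converge (a sketch of the recursive scheme in the text)."""
    c_oil = c_solid = 0.0
    for _ in range(max_iter):
        new_solid = (s2 - A_OIL * c_oil) / A_SOLID
        new_oil = (s1_over_t - B_SOLID * new_solid) / B_OIL
        if abs(new_oil - c_oil) < tol and abs(new_solid - c_solid) < tol:
            break
        c_oil, c_solid = new_oil, new_solid
    return c_oil, c_solid

# Synthesise signals for 2,000 ppm oil + 60 ppm solids, then recover them.
s2 = A_OIL * 2000 + A_SOLID * 60
s1t = B_OIL * 2000 + B_SOLID * 60
oil, solid = estimate(s2, s1t)
print(round(oil), round(solid))   # -> 2000 60
```

Because the cross-contributions are small relative to the direct ones, the iteration contracts quickly and converges to the true pair of concentrations.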

Set-up

Optical configuration

A typical nephelometry set-up using a light source, measurement cell and optical detectors is illustrated in the following schematic (Fig 9):

Figure 9 – Schematic set-up


An infra-red LED is used as the light source. For safety reasons, the electronic hardware and the measuring cell are separated; light emitted or detected is therefore transmitted between the measuring cell and the electronic enclosure via optical fibres. To limit the angular spread from the fibre optic output, the light is focused at the centre of the cell by an 8.5 mm focal length lens. Matching 8.5 mm focal length lenses are also set in front of the photodiode detectors to focus the light onto the optical fibres. The signals from the sensors are 10-bit encoded over three sensitivities (three different signal amplifications) in order to increase the overall dynamic range. The OIW flow goes through a glass tube of 6 mm inner diameter (9 mm external diameter). The cell was modified from the original product specifically to study angle deviation effects: the detector can be rotated around the glass cell, keeping the cell to detector distance constant and allowing measurement at predefined angles (Fig 10). 5 mm diameter optical fibres are used in the original design.
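One possible reading of the three-sensitivity scheme is sketched below; the gain values and the channel-selection logic are assumptions for illustration, not the actual Smart 50M firmware:

```python
GAINS = (1.0, 10.0, 100.0)   # three amplification stages (illustrative values)
ADC_MAX = 1023               # 10-bit converter full scale

def read_intensity(adc_counts_per_gain) -> float:
    """Pick the most sensitive channel that is not saturated and rescale it
    to the gain-1 reference, extending the usable dynamic range."""
    for gain, counts in sorted(zip(GAINS, adc_counts_per_gain), reverse=True):
        if counts < ADC_MAX:          # reject saturated channels
            return counts / gain
    return ADC_MAX / GAINS[0]         # every channel saturated: clip

# Weak signal: only the x100 channel resolves it to better than one count.
print(read_intensity((0, 3, 310)))   # -> 3.1
```

With three overlapping gain stages, a 10-bit converter can cover roughly the dynamic range of a much wider single-channel converter, at the cost of reading three channels.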



Figure 10 – Smart 50M – modified measurement cell

Oil in water mixing

Mixing oil and water is a delicate process, as oil and water separate quickly; a static condition was therefore not a viable option. A specifically designed rig generated the OIW flow, with separate pumping systems for oil and water. A first pump set the water flow rate at 3 l/min. Oil was injected into this flow by a second pump, and the oil concentration was changed by modifying the injection flow rate. A third pump was used to mix the oil and water and to break down the oil; this conditioning pump produced smaller droplets and narrower size distributions, as well as increasing the mixture homogeneity. Before entering the measuring cell, the mixture passed through a static mixer to further increase homogeneity, then into a specific reservoir to degas the mixture. A control valve at the outlet of the measuring cell set the flow pressure at 3 bar. The main benefits of increasing the back pressure were to avoid air bubble formation by reducing pump-induced cavitation and to further reduce oil droplet sizes. Switching from a tank containing fresh water to a tank containing water with a known solid particle concentration allowed the same oil concentration to be tested with or without solid particle contamination. Tests were performed for three solid particulate (iron oxide) concentrations: 0, 60 and 120 ppm.

Methods

Cell diameter, optical fibre diameter, light source to cell distance, cell to detector distance and sensor angle were the parameters under investigation. The cell diameter being one of the hardest to change without major modification of the equipment, the entire experiment was carried out with a 6 mm inner diameter cell; however, data from previous studies were available for three larger cell diameters. Six configurations of optical fibre diameter and cell to detector distance were studied; the test conditions are given in Table 1. One parameter was changed at a time to analyse its effect. For each configuration, an angle sweep was performed for each combination of oil and solid particulate concentration: for each position from 0˚ to 135º, at 15º intervals, the signal intensity was recorded. The sensor had no infra-red filter fitted, therefore measurements were taken in complete darkness, as ambient light influenced the reading.

Configuration   Ø fibre source (mm)   Ø fibre sensors (mm)   Source–cell distance (mm)   Cell–detector distance (mm)
1               3                     3                      3                           3
2               3                     3                      3                           8
3               3                     5                      3                           8
4               5                     5                      3                           8
5 (= 3)         3                     5                      3                           8
6               3                     5                      3                           3
7               3                     5                      3                           12

Table 1 – The six investigated configurations (configuration 5 repeats configuration 3)

The oil concentration was deduced from the total flow rate and the oil injection flow rate; to set it in the desired range, the injection rate was adjusted after the oil concentration was computed. Signal intensity values were recorded five minutes after any change of oil or solid particulate concentration, in order to be in steady conditions and to avoid any contamination.
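The deduction of the oil concentration from the two flow rates can be sketched as follows; the simple volumetric-fraction formula is an assumption consistent with the description above, not a quoted Rivertrace procedure:

```python
def oil_concentration_ppm(water_flow_l_min: float, oil_flow_l_min: float) -> float:
    """Volumetric oil concentration in ppm deduced from the water pump flow
    rate and the oil injection flow rate (assumed formula, for illustration)."""
    total = water_flow_l_min + oil_flow_l_min
    return oil_flow_l_min / total * 1e6

# Water pump at 3 l/min; an injection of 0.015 l/min gives roughly 5,000 ppm.
print(round(oil_concentration_ppm(3.0, 0.015)))   # -> 4975
```

Adjusting the injection rate and recomputing, as described above, lets the operator converge on any target concentration in the range.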

Analysis of previous experiments

Data from previous tests were available but had never been analysed. The first step was to analyse them in order to determine whether one configuration was better than the others. Plotting the signal intensity as a function of angle for different oil concentrations, with and without solid particulates, highlights the possibilities for oil concentration measurement and for oil and solid particulate differentiation (Fig 11). In setup 4 (red circles), the oil concentration can be determined, as the signal intensities differ between concentrations even at high levels; however, the oil and solid particulate differentiation is very low, the differences between signals with and without solid particulates being small over the whole angle deviation range. This is also true of setups 1 and 2. Configuration 3, however, shows good oil and solid particulate discrimination (red arrows).


Fig 11 – Signal intensity versus angle for different oil concentrations, with and without solid particulates

The only difference between configuration 3 and configurations 2 and 4 is the optical fibre diameters: a 3 mm diameter optical fibre for the light source and 5 mm on the detector side was found to perform best. These parameters were therefore kept the same for subsequent work. Configuration 4 has the particularity of being relatively insensitive to solid particulate content, so ignoring solid particulates would introduce only minor errors in the measurements. This can be beneficial if solid particulate discrimination is not needed (for low solid concentrations), but could also be useful at higher solid particulate concentrations: too high a solid particulate concentration can strongly perturb configuration 3, which is highly sensitive to solid particle concentration, so a less sensitive configuration can widen the range. The only difference between configurations 1 and 2 is the distance between the cell and the detector. As this parameter has an effect on solid particulate differentiation, it was investigated further, with tests carried out for 3, 8 and 12 mm. Configurations 3 and 5 were the same, in order to correlate the previous experimental data with the new set of measurements. Configuration 3 was further studied by plotting signal intensity as a function of oil concentration for 0, 60 and 120 ppm solid particulate concentrations (Fig 12). A roll-over is noticeable for angles between 45º and 75º beyond 1,500 – 2,000 ppm oil concentration. Angles above 90º can be used for S2; however, the curves are relatively flat. The low gradient of the curves reduces the accuracy of the oil concentration determination, as the higher the gradient, the better the accuracy; there is, however, a trade-off between signal intensity and accuracy.



Figure 12 – Signal intensity versus oil concentration for different angles and solid particulate concentrations, for configuration 3

The S/T curves show no roll-over at high ppm (Fig 13) and can therefore be utilised for oil concentration determination. Angles between 30º and 75º can possibly be used, with 45º the optimal angle, as its curve shows a constant gradient from 0 to 5,000 ppm. Curves for angles above 75º show very low gradients at low ppm and would give poor accuracy in the lower concentration range. However, the dynamic range of the T signal has to cover its large variation from 0 to 5,000 ppm: at 5,000 ppm the T signal is relatively low (as at the other angles (Fig 12)) and a slight variation or noise has a major influence on the deduced oil concentration.

Figure 13 – S/T and T as functions of OIW concentration for different angles of interest, for configuration 3



Configuration 3 showed promising results. Solid particulate discrimination being a major issue for accurate measurements, further experiments were performed in order to improve it. Configuration 3, also called setup 5, was tested under two different solid particulate concentrations, 60 ppm and 120 ppm.

Comparison with the previous experiments shows that reducing the cell diameter (i.e. the measurement volume) from 13 mm to 6 mm limits the roll-over effect: the signals still flatten out, but at a higher oil concentration. Reducing the cell diameter is thus a way to minimise the high absorption; however, decreasing the diameter further poses additional challenges for the miniaturisation of the cell’s cleaning system.

Results and analysis

The signal intensity as a function of angle for different oil concentrations, with and without solid particulates, highlights the possibilities for oil concentration measurement, as well as oil and solid particulate differentiation, for the different configurations (Fig 14). Setup 6 demonstrates very low solid discrimination: the signal intensities with and without solid particulates are similar. Setup 6 also shows poor oil concentration determination, as the signal intensities are close to each other at high oil concentrations. Setup 7 showed better solid particulate discrimination than setup 5 at the same solid particulate concentration, and good intensity differences between oil concentrations even at high levels. Setup 7 was therefore preferred to setups 5 and 6.

Figure 14a – Signal intensity versus angle for different oil concentrations, with and without solid particulates



Figure 14b – Signal intensity versus angle for different oil concentrations, with and without solid particulates

The further the sensors are from the cell, the better the solid discrimination. Of the 3, 8 and 12 mm distances tested, 12 mm gives the best results, with higher solid discrimination at the cost of a minor loss in signal intensity. This is considered to be because the collection angle is narrowed: the system becomes more sensitive to radial intensity variation (Figs 3 & 4) and therefore to the presence of solid particulates.

Signal intensity as a function of oil concentrations for different angles and solid particulate concentrations for configuration 7 was plotted (Fig 15) in order to further investigate this set-up.

Figure 15 – Signal intensity versus oil concentrations for different angles and solid particulate concentration for configuration 7



60 ppm of solid particulate in 5,000 ppm oil concentration is, proportionally, a small amount (1.2%); however, a signal difference is still noticeable. The 45° angle configuration offers a constant signal difference between oil and solid particulate. A roll-over is noticeable for angles between 45° and 70°; nonetheless, the use of S/T corrects this issue (Fig 16). Angles between 90° and 135° can be used for S2, with 90º optimal for higher accuracy in the lower concentration range (5,000 ppm being the target). For higher oil concentrations such as 9,000 ppm, however, 120º or 135º could be used, as the curves for these angles still do not roll over. This illustrates that oil concentration measurement by nephelometry at 9,000 ppm and above is possible. 30° is the optimal angle for oil concentrations between 0 and 5,000 ppm, while the gradient of the curve at 45º is relatively constant from 0 to 9,000 ppm oil concentration and offers the best accuracy for S1/T over this full range.

60 ppm solid particulate concentration has a negligible effect on the measurement. Further tests with higher solid particulate concentrations would be required to investigate the effect of solid particles at high oil concentration (60 ppm of solid particulates in 9,000 ppm of oil represents only 0.66%). A second option is to use S2/T instead of S2, as the gradients of the curves at deviation angles above 90º are relatively low at high oil concentration (Fig 16). The gradients of the S2/T curves are steep at high oil concentration (Fig 17) but relatively flat at low oil concentration, and inversely for S2. A combination of S2/T and S2 could therefore be used depending on the oil concentration range, conditioned by a threshold on T; this option would, however, imply major modification of the computation algorithm. The sensitivity of the instrument was estimated: considering the curve at 45º, a 1% measurement fluctuation gives a variation of 115 ppm at 5,000 ppm (i.e. 2.3%) and 185 ppm at 9,000 ppm (i.e. 2.05%) in the oil concentration reading. The gradient of the curve is sufficiently high to give good accuracy in the oil measurement over the specified range, even at high concentration.
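The quoted sensitivity figures can be verified directly as relative errors on the reading (the text rounds the second value to 2.05%):

```python
def relative_error(delta_ppm: float, conc_ppm: float) -> float:
    """Relative error on the oil reading, as a percentage of the reading."""
    return delta_ppm / conc_ppm * 100

# A 1% signal fluctuation on the 45 deg S1/T curve corresponds to
# 115 ppm at 5,000 ppm and 185 ppm at 9,000 ppm:
print(f"{relative_error(115, 5000):.2f}%")   # -> 2.30%
print(f"{relative_error(185, 9000):.2f}%")   # -> 2.06% (quoted as ~2.05%)
```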

Figure 16 – S/T and T as functions of OIW concentration at different angles of interest, for configuration 7



Conclusion

The 5,000 ppm maximum oil range was validated as reachable; a suitable configuration was defined without major modification of the original set-up. The further away the sensors are from the cell, the better the solid discrimination, for a minor signal intensity drawback: as the collection angle is reduced, the system becomes more sensitive to radial intensity variation, i.e. to solid particulates. A distance of 12 mm was found to be the most suitable. The 45º and 90º angles can be used for oil and solid particulate concentration determination, implying no need for modification of the original design. When S rolls over or flattens out at high oil concentration, S/T retains a strong gradient and can still be employed to return a unique value. The principle applied to S1 could also be applied to S2: the study shows the possibility of using S2/T instead of S2 for high oil concentrations, the limitation being the recording of the T signal with a correct dynamic range. The study highlights that oil concentrations above 9,000 ppm can still be measured by nephelometry; further experimental work would be required to define the limit of the technique. However, solid particulate discrimination becomes more challenging at high oil concentrations, especially at proportionally low solid particulate concentrations. Configuration 4 shows low dependency on solid particulates; this configuration could be a viable option if no solid discrimination is needed.

Acknowledgements

The project was part of a larger KTP project funded by the UK government and Rivertrace. We thank Keith Colgate for his technical support and critical analysis, and Andrew Wickenden for his help and advice on the experimental rig.

References

1. International Maritime Organization, Revised guidelines and specifications for pollution prevention equipment for machinery space bilges of ships. Resolution MEPC.107(49), 2003, Annexe 13: p. 1-34.
2. International Maritime Organization, Revised guidelines and specifications for oil discharge monitoring and control systems for oil tankers. Resolution MEPC.108(49), 2003, Annexe 14: p. 1-42.
3. Rivertrace, http://www.rivertrace.com/images/products/PDFs/Smart%2050m_Boiler_CondensateCooling_Water%20Monitor.pdf.
4. He, L.M., et al., Rapid in situ determination of total oil concentration in water using ultraviolet fluorescence and light scattering coupled with artificial neural networks. Analytica Chimica Acta, 2003, 478(2): p. 245-258.
5. Yang, L. and N. Bowler, Traveling Waves on Three-Dimensional Periodic Arrays of Two Different Magnetodielectric Spheres Arbitrarily Arranged on a Simple Tetragonal Lattice. IEEE Transactions on Antennas and Propagation, 2012, 60(6): p. 2727-2739.
6. Jensen, K.D., Flow measurements. Journal of the Brazilian Society of Mechanical Sciences and Engineering, 2004, 26: p. 400-419.
7. Mie, G., Beiträge zur Optik trüber Medien, speziell kolloidaler Metallösungen. Ann. Phys., 1908, 330: p. 377-445.
8. University of Oregon, Mie scattering diagrams. http://zebu.uoregon.edu/2004/ph311/lec16.html, 2004.
9. Optek, Turbidity Measurement Units. http://www.optek.com/Turbidity_Measurement_Units.asp.





Integrating climate research and extreme event simulation into the assessment of weather-related hazards A. Cobb(1), P-L. Vidale(1), H. Galy(2)

(1) Department of Meteorology, University of Reading
(2) Willis Analytics, Willis Group

a.cobb@reading.ac.uk

Abstract

Willis is a global insurance and reinsurance broker, with operations in over 100 countries, handling risk management and reinsurance across a wide range of service areas and industries. In view of the large insured losses associated with natural hazards such as tropical cyclones, catastrophe models are vital tools for pricing insurance products in susceptible areas.

Over the past ten years, the University of Reading has developed world-leading high-resolution global climate model capabilities from which tropical cyclone tracks have been extracted, and has also applied this method to a number of reanalysis products. These data can be used alongside catastrophe models to assess the uncertainty within the loss estimates those models produce, with the aim of improving pricing and reducing insured loss.

In partnership, Willis and the University of Reading are developing a unique climate risk ‘laboratory’. A focal point of the KTP is the integration into the laboratory of dynamically simulated extreme events from global climate models, as well as reanalysis data, and provision of the expert scientific knowledge to assist Willis and its clients in making appropriate use of results from academic research.

The partnership provides expert scientific guidance, focussed on identifying current and future climate extreme risk, advising Willis’ specialists and clients on the interpretation and application of climate research within the business planning process, creating distinct market advantage. This partnership aims to positively impact the competitiveness and productivity of the company partner through the transfer and embedding of knowledge, technology and skills.



Introduction

Background

Willis Group
Willis handles risk management and reinsurance across a wide range of service areas and industries. Key to Willis' competitiveness is the ability to understand and explain the occurrence of extreme loss-causing events, such as tropical cyclones, as well as the governing mechanisms in the climate system that regulate their occurrence. This is undertaken through the mediation of scientific and statistical knowledge via quantitative models, which Willis both licences and develops. Willis Analytics works on behalf of all of Willis' insurance and reinsurance clients, providing analytical services including the operation and interpretation of licenced models, development of in-house modelling and, increasingly, comparison of multiple model outputs, which underpin all transactional business.

University of Reading
Over the past ten years, the University of Reading has developed world-leading high-resolution global climate model (GCM) capabilities. GCMs are physically based models that produce a vast amount of climate data across the globe and can now be run at resolutions comparable to those used to produce weather forecasts. At these high resolutions, GCMs are able to simulate tropical cyclones as well as global modes of variability, such as El Niño. The University of Reading also has access to a well-known and well-respected tracking algorithm, TRACK (Hodges, 1994). This algorithm can be run on reanalysis and GCM data to identify and extract tropical cyclone tracks, along with associated data such as wind speed and central pressure.

Need for collaboration
The ability to incorporate scientific expertise into the modelling process is critical to its success and cannot be achieved in isolation by the Willis team. Willis needs climate science expertise, particularly in relation to high-resolution, weather-resolving models. A focal point of the partnership is the integration of dynamically simulated extreme events into catastrophe models and other decision-making tools, and the provision of expert scientific knowledge to assist Willis and its clients in the appropriate use of scientific results. The partnership provides expert scientific guidance, focussed on identifying current and future climate extreme risk, advising Willis' specialists and clients on the interpretation and application of climate research within the business planning process, creating a distinct market advantage.

Aim: To integrate climate research and extreme event simulation into the assessment of weather-related hazards, delivering improved climate catastrophe risk pricing, product development and market expansion.



Project motivation

Extreme weather events, including tropical cyclones, can lead to huge socio-economic losses. The insurance industry, and risk management in general, is particularly sensitive to weather-related catastrophic events, and the impact of climate variability and climate change on extreme weather events is of great concern. For instance, Hurricane Katrina, which struck in 2005, resulted in over $80 billion of damage (King, 2013). In the same hurricane season, a total of 14 major hurricanes were observed, which was considered exceptional. The question of the exceptionality of 2005 remains open. The large variability in the number of storms in the Atlantic over a 60-year period is shown in figure 1.

Figure 2 highlights the year-to-year variability in losses from catastrophic events. In most years, weather-related (meteorological) events are the greatest contributor to the annual loss. The trend has been for increasing losses with time, but with a huge amount of variability. Extreme weather events (EWEs) are responsible for over 75% of insured losses, and the rate of total annual loss is apparently increasing (1). There is even larger uncertainty about the past, for which we have no data on extreme events, and the future, which is influenced by anthropogenic climate change. The latest IPCC report (2013) still reflects major uncertainty about expected changes.

Figure 1. Atlantic storm counts in observations (Best Track)
Figure 2. Insured losses against time (Swiss Re)

Methods and tools

Global Climate Model (GCM)
GCMs are computer models that output data across the whole globe. They are based on physical equations rather than statistics, and are therefore able to provide alternative realities. These models can simulate many years of data, and tropical cyclone storm tracks can be extracted from them using tracking algorithms. As these models represent the physical environment, modes of variability such as El Niño are also reproduced, so sub-samples based on these modes of variability can be selected. Figure 3 shows the number of storms in the Atlantic over 150 years produced by HiGEM, a high-resolution GCM. This model reproduces well the long-term variability seen in observations (figure 1).

Figure 3. Atlantic storm counts simulated in the HiGEM high-resolution GCM (150 years)
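As a concrete illustration of the sub-sampling described above, the sketch below filters a set of cyclone track records by ENSO phase. The track records and the year-to-phase mapping are invented for illustration only; a real analysis would use TRACK output and an observed ENSO index.

```python
# Illustrative sketch: sub-sampling tropical cyclone tracks by climate state
# (e.g. El Niño / La Niña years). The track records and the ENSO phase
# classification below are invented for illustration only.

tracks = [
    {"storm_id": 1, "year": 1997, "max_wind_ms": 62.0},
    {"storm_id": 2, "year": 1999, "max_wind_ms": 48.5},
    {"storm_id": 3, "year": 1997, "max_wind_ms": 33.0},
    {"storm_id": 4, "year": 2005, "max_wind_ms": 75.0},
]

# Hypothetical classification of seasons by ENSO phase:
enso_phase = {1997: "el_nino", 1999: "la_nina", 2005: "neutral"}

def subsample(tracks, phase):
    """Return only the tracks occurring in years of the given ENSO phase."""
    return [t for t in tracks if enso_phase.get(t["year"]) == phase]

el_nino_tracks = subsample(tracks, "el_nino")
print(len(el_nino_tracks), "of the example tracks fell in El Niño years")
```

With the large GCM and reanalysis track sets described in the text, the same filter yields sub-samples big enough for robust statistics within each climate state.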



Reanalysis data

Retrospective analyses (or reanalyses) integrate observations with numerical models to produce a temporally and spatially consistent synthesis of variables that are not easily observed. The breadth of variables, as well as the observational influence, makes reanalyses ideal for investigating climate variability. Reanalyses contain data at grid points around the globe, which makes them easily comparable to model data, and they are a close representation of the 'truth' due to the constraint the observations place on the model evolution. Within the reanalysis tracks, sub-samples corresponding to different climate states can be selected, as with the GCM tracks.

TRACK

The tracking algorithm TRACK (Hodges, 1994) has been used on both the GCM and reanalysis data to extract tropical cyclone tracks. These tracks are based on the physical state of the atmospheric and oceanic environment and show the lifetime of the storm, i.e. its location, timing and intensity.

Catastrophe models

Catastrophe models are risk assessment tools used to estimate the possible financial loss due to a particular hazard. These models provide loss estimates by overlaying the properties at risk with the potential natural hazard. A probabilistic approach to catastrophe loss analysis is the most appropriate way to handle the abundant sources of uncertainty inherent in all natural hazard related phenomena.

The four basic components of a catastrophe model are hazard, vulnerability, exposure and loss (figure 4). For example, a tropical cyclone hazard is characterised by its projected path and wind speed, as well as the frequency of these events. The hazard component of a catastrophe model is traditionally a statistical model, relying on historical data to provide probabilistic information about the future likelihood and impact of catastrophic events, including information about phenomenon frequency, location and severity. Due to the relative infrequency of catastrophic events, there is a scarcity of historic data; in order to create a larger catalogue of simulated events, statistics are applied to these historic tracks to produce many more simulated events.

Figure 4. The modules within a catastrophe model

The exposure or inventory is the portfolio of properties at risk. These two modules enable the calculation of the vulnerability, for example 'slight' or 'moderate' damage (Grossi and Kunreuther, 2005). From this measure of vulnerability the loss is calculated, for example the cost to repair or replace a structure, as well as indirect costs such as business interruption. The output of loss estimates can be a hazard map or an Exceedance Probability (EP) curve (figure 5) (2). This is a graphical representation of the probability that a certain level of loss will be surpassed in a given time period.

Figure 5. A mean EP (loss) curve
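An EP curve of the kind just described can be produced directly from a catalogue of simulated annual losses. The minimal sketch below uses synthetic, exponentially distributed losses purely for illustration; a real catastrophe model would derive the losses from its hazard, exposure and vulnerability modules.

```python
# Minimal sketch: building an Exceedance Probability (EP) curve from a
# catalogue of simulated annual losses. The losses here are synthetic and
# purely illustrative, not output from any real catastrophe model.
import random

random.seed(0)
# 10,000 synthetic years of annual loss (in $m):
annual_losses = [random.expovariate(1 / 50.0) for _ in range(10_000)]

def exceedance_probability(losses, threshold):
    """Fraction of simulated years whose loss exceeds `threshold`."""
    return sum(loss > threshold for loss in losses) / len(losses)

# Evaluating this probability over a range of thresholds traces out the EP curve:
for threshold in (10, 50, 100, 200):
    p = exceedance_probability(annual_losses, threshold)
    print(f"P(annual loss > {threshold} $m) = {p:.3f}")
```

Plotting the threshold against the exceedance probability gives a curve of the shape shown in figure 5; the larger the simulated catalogue, the smoother and more stable the tail of the curve.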


Strategy and experimental design

The overarching strategy of this project is to inject information about climate variability and change into industry. In partnership, we are building methods and tools to explore this complex climate information and enable a two-way flow of information.

A flagship product development project, and a focal part of the KTP, is the new Tropical Cyclone Laboratory, which is being designed to directly incorporate event data generated by GCMs and reanalyses alongside standard observations, allowing comparison to benchmark risk estimates and third-party catastrophe models. In the Tropical Cyclone Laboratory, data and information are shared via a neutral container, which facilitates easy communication and the ability to quickly share results and conduct discussions. The data deposited in the laboratory will be in a standard format, allowing easy manipulation across a variety of datasets. Alongside the data there will be algorithms from both the University and Willis that can be applied to it. With a central, shared location, experiments can be run by manipulating this data. This provides the ability to perform idealised sensitivity studies on entire chains of decision-making, for example assessing the sensitivity of changes within the different modules and their effect on the loss result.

The data provided by the University of Reading take the form of tropical cyclone tracks from the different reanalyses and GCMs. These tracks complement the historic dataset and vastly increase the sample size. Therefore, when running experiments, especially when using a sub-sample of tracks, more robust analysis can be made, for example when assessing the loss sensitivity to uncertainty in the hazard. With this augmented sample size, we can investigate interannual variability, such as El Niño / La Niña, and can come to robust conclusions using this large sample of historic, reanalysis and GCM tracks.

A key output is the visualisation of results, whether as a loss map or EP curve as a final output, or as plots of storm tracks that have been sub-sampled and run through the remainder of the model. These maps and graphs can be easily examined and interpreted by both Willis and the University of Reading. The laboratory is a useful tool that enables communication between the University of Reading and Willis, with access to data, metadata and documentation, along with results from experiments run by both parties.

Discussion

The traditional stochastic approach used in catastrophe models by industry assumes that the climate is a stationary system. In fact, the climate system is non-stationary, displaying variability on multiple scales, which has been shown to have an impact on extreme weather activity, such as tropical cyclones. Tropical cyclone activity is anomalous, for instance, in years of El Niño or La Niña. We are beginning to investigate ways in which extreme event simulations could be used as the basis for the hazard component of a catastrophe model, instead of relying primarily on historical data. Complementing historical data with GCMs, which can provide "synthetic" storm data, offers the perspective of long-term climate variability and the ability to experiment with different future scenarios. This approach must be combined with rigorous assessment of GCM skill. This project addresses fundamental business issues affecting the risk management decisions of global

93


insurers/reinsurers, who are increasingly reliant on quantitative models to form risk opinions in the face of uncertainty. Models based on limited and inconsistent historical data cannot fully address these questions. While research has shown that global-scale climate patterns drive EWE patterns, this is not captured in current catastrophe models. This KTP facilitates communication between industry and academia, sharing information and prompting discussion on experimental results. With scientific expertise supporting and guiding decisions in industry, it is hoped that Willis will gain a business advantage.

Limitations of the approach

In order to trust the data extracted from a GCM, it is important to assess its skill. All models have errors, but when these are known they can be taken into account when conducting analyses. The reanalysis datasets are created by constraining a GCM with observations, and so can be seen as a representation of the 'truth'. However, any errors in the GCM may still be apparent in the reanalysis. The observations that go into the reanalysis, and are also used on their own, have errors too, which tend to increase further back in time; this is because better observing systems, such as satellites, are constantly being developed and deployed. Nevertheless, all of these have a physical basis and can be thought of as different representations of reality. The statistical approach of generating a large sample of tracks, traditionally used in catastrophe modelling, is not based on physics, and so there are errors associated with this approach too.

In this project the hazard module is being explored, with the addition of data from GCMs and reanalyses, and also by running various experiments within the module. However, the other modules are very important, as the impact of tropical cyclones is driven by their interactions with the built environment, systems, networks and populations. Our understanding of exposure and vulnerability is insufficient in most developed and developing regions, so enhancing the quality of exposure data is a major priority for disaster risk reduction in communities at local and global scales.

Future research and applications

If successful, this method can be applied to other perils, including flood, earthquake and drought, as well as integrating other weather models and data sources within a coherent platform.


References

(1) http://media.swissre.com/documents/sigma1_2014_en.pdf

(2) Colorado.edu

Grossi, P. and Kunreuther, H. (2005): Catastrophe Modelling: A New Approach to Managing Risk. Springer

Hodges, K. I. (1994): A General Method for Tracking Analysis and its Application to Meteorological Data. Mon. Weather Rev., 122, 2573-2586

King (2013): Financing Natural Catastrophe Exposure: Issues and Options for Improving Risk Transfer Markets. Congressional Research Service



The effects of material variations in high value manufacturing industries

Akbar Jamshidi(a,12), Jon Cave(b), Paul G. Maropoulos(a), Martin P. Ansell(a)

(a) Department of Mechanical Engineering, University of Bath, BA2 7AY, UK
(b) Helander Precision Eng Ltd, Kennet Close, Tewkesbury Business Park, Tewkesbury, GL20 8HF, UK

Abstract

It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately, and with what degree of flexibility, parts can be made. This brings uncertainty in finding the most suitable manufacturing method, as well as in controlling product and process verification systems. The aim of this research is to develop a system for capturing the company's knowledge and expertise and then reflecting it in an MRP (Manufacturing Resource Planning) system. A key activity here is measuring manufacturing and machining capabilities to a reasonable confidence level. For this purpose an in-line control measurement system has been introduced to the company. Using SPC (Statistical Process Control) not only helps to predict trends in the manufacture of parts but also minimises human error in measurement. A Gauge R&R (Repeatability and Reproducibility) study identifies problems in measurement systems; measurement is, like any other process, subject to variability. Reducing this variation via an automated machine probing system helps to avoid defects in future products.

Developments in the aerospace, nuclear, oil and gas industries demand materials with high performance and high-temperature resistance under corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes. The same characteristics make superalloys difficult-to-cut alloys when it comes to forming and machining. Furthermore, due to the sensitivity of superalloy applications, in many cases they must be manufactured to tight tolerances. In addition, superalloys, specifically nickel-based ones, have unique features such as low thermal conductivity, due to the high nickel content in their composition. This causes a high surface temperature on the workpiece at the machining stage, which leads to deformation in the final product.

As with every process, material variations have a significant impact on machining quality. The main causes of variation are chemical composition and mechanical hardness; the non-uniform distribution of metal elements is a major source of variation in metallurgical structures. Different heat treatment standards are designed for processing the material to the desired hardness levels, based on the application. In order to take corrective actions, a study of the material aspects of superalloys has been conducted, in which samples from different batches of material were analysed. This involved material preparation for microscopy analysis and assessment of the effect of chemical composition on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.

Keywords: Superalloys, Gauge R&R, Process capability, Measurement, Metrology

12 Corresponding author. Tel.: +44 1684 293003. E-mail address: A.Jamshidi@bath.ac.uk


Introduction

The Laboratory of Integrated Metrology Applications (LIMA) at the University of Bath conducts research in metrology and provides independent R&D support for industry. Established in 2009, LIMA is an independent body for collaborative research and the development of innovative metrology-enabled applications. LIMA combines fundamental research with industrial applications to develop new technologies which benefit UK manufacturing.

Helander Precision Engineering provides manufacturing services, specialising in high-value manufactured parts in various materials for the oil, gas, aerospace and, lately, nuclear industries. Parts are produced to high specifications and tolerances from difficult-to-machine materials selected for their stability in oil and gas drilling environments. Since 2007 Helander has been a member of the Calder Group, a pan-European engineering group active in specialist engineering, lead engineering and the manufacture and distribution of lead sheet. It mainly supplies parts to the aerospace, construction, nuclear power and oil & gas industries.

A KTP project was introduced in the company to enhance the existing process capability for long-term contracts, with a focus on excellent service and cost effectiveness. The goal is to develop a process and product verification system that complies with the requirements of key customers. There are more than 35 SMEs working in the Helander supply chain. Products received from and sent to these companies need to be verified against the different standard levels required and targeted by customers. This project aims to introduce a robust verification process that is aligned with Helander's capabilities. It is vital to ensure Helander's confidence level in all of the different stages of operations. The lack of systematic verification in Helander's supply chain increases the variation in manufactured components from batch to batch. Studies on different material and heat treatment standards have identified one of the causes of these variations.

It has been observed that many parts shipped to customers, after multiple verifications and inspections throughout manufacturing, still return as rejects, mainly due to dimensional non-conformities. Lack of knowledge in conventional manufacturing companies causes a tremendous amount of rework on products, resulting in increased cost, longer lead times and capacity problems. Even the best MRP system and the most advanced measurement systems will not help if companies fail to assess their own capabilities in verification and manufacturing.

In the first month of the KTP project a Gauge R&R (Repeatability and Reproducibility) study was conducted. Interpretation of the results shows how unreliably the parts were being measured; the results of the study are briefly presented in this paper. In order to have a higher confidence level in the measurement of sensitive parts, more accurate measuring tools should be used. Gauge R&R is an ideal tool for examining assessment programs that require subjective interpretation. It assists companies in understanding their own processes and validating the effectiveness of the figures collected through their measurement systems. The use of Gauge R&R brings more confidence to engineers and prevents rework caused by manufacturing out-of-specification parts.

Background

Measurement system analysis provides decision makers with a useful set of tools for understanding which parts of a system cause the most variation [1]. These variations can be linked directly to the true dimension of the part, to the inspector, or to the measurement device. Gauge R&R helps to identify which part of the measurement system makes the greatest contribution to error. Gauge R&R also checks whether the performance of the measurement system is stable over time and is not going to change across a range of parts. It is necessary to evaluate variation in a measurement system before investing in measuring devices. Accuracy and precision can be evaluated at the same time via Gauge R&R studies.

In recent years technology has moved forward in the aerospace, nuclear, oil and gas industries at a great pace. However, the limitations of materials operating at high temperature with good strength and corrosion resistance have always been an issue, since the emergence of the jet engine for aerospace and of down-hole equipment for the oil industries. The use of superalloys is



very common in such industries. The most familiar superalloy for the oil and gas industry is Inconel alloy 718 (UNS N07718 / W.Nr. 2.4668). Selected characteristics of nickel-based Inconel 718 are shown in Table 1 [2].

Table 1 – Selected chemical and mechanical properties of Inconel 718

Designation:                     Ni-Cr alloy: Inconel 718, STA
Condition:                       Solution treated and aged
US name:                         ASTM Grade: N07718; AMS: 5662, 5663, 5664, 5832, 5914, 5962, 5596, 5597, 5950
Density:                         8.18e3 – 8.26e3 kg/m^3
Young's modulus:                 198 – 208 GPa
Yield strength:                  1e3 – 1.11e3 MPa
Tensile strength:                1.17e3 – 1.32e3 MPa
Hardness (Brinell):              379 – 471 MPa
Thermal expansion coefficient:   12.8 – 13.4 µstrain/°C
Composition (%):                 Al 0.5; Cr 19; Fe 18.5; Mo 3; Nb 5.1; Ni 52.5; Ti 0.9

Inconel 718 can be wrought or cast, depending on the application. Cast superalloys may vary in grain size from piece to piece [3]. Wrought superalloys are originally made by casting, reaching their final state after thermal and forming processes. In general, wrought superalloys are more uniform than cast superalloys, and have better ductility due to their greater uniformity and similar grain sizes. Most wrought nickel superalloys are used up to about 1200 to 1300°F (649 to 704°C). In most cases when temperatures exceed 1000°F (540°C), steel and titanium alloys undergo corrosive degradation; compared with other commercially available metals, superalloys are the most suitable for high-temperature applications.

Machining superalloys has always been a challenge due to the hard nature of the metal itself. Superalloys tend to harden during the machining process, and their low thermal conductivity builds up heat during machining operations, resulting in high cutting temperatures. The high strength and toughness of superalloys, in addition to their work-hardening characteristics, demand a high cutting force [4]. There are several research papers on machining superalloys, including the use of high-speed turning for Inconel 718 [5], a study of cutting forces on Inconel 718 with a round cutting edge [6], hybrid machining of Inconel 718 [7], dry and high-speed machining of Inconel 718 alloy [8] and a review of key improvements in the machining of aerospace superalloys [9]. However, there is still considerable work to be done to understand the nature of superalloys and achieve a robust method for their effective machining.

Gauge R&R

A Gauge R&R study has been conducted using different measurement tools. It demonstrated that certain measurement devices are not capable of measuring tight tolerances. This might seem obvious in many cases; however, when the tolerances are at the level of microns, deciding which measurement tool should be used is more of a challenge. For example, a customer returned parts with a shaft whose required diameter was between 64.970 mm and 64.985 mm, a tolerance of 15 µm; the part was classified as oversize by the customer. The part had been measured with a digital micrometer before delivery. When the part came back, observations and measurements showed that it was marginally oversize (see Figure 17). The highlighted area shows the acceptable range of dimensions (64.970 mm to 64.985 mm). However, almost all of the readings were oversize, even though the same measurement device was used. The study shows how parts can be inspected several times and still be outside the tolerance. The variation in readings also demonstrates error in the measurement system. Although the similarity between the two operators' readings suggests that the readings are precise, they are clearly inaccurate, falling outside the acceptable range by almost 15 µm.



The Gauge R&R study was performed on the outer diameter of a similar component. The results (Table 2) show how different combinations of measurement tool and operator affect readings for the same components. The measurement gauges used for this study were a manual micrometer and a comparator. The Gauge R&R study revealed that a micrometer is not suitable for such a tight tolerance (15 µm). In addition, the operator may contribute to the failure of a measurement method; this can be addressed by a skills review for operators followed by training.

Figure 17 – Measured figures against specified allowance tolerance

In order to verify the cause of the error for the parts returned by customers, the thermal expansion of Inconel 718 alloy was examined. The thermal expansion coefficient of Inconel 718 is 12.8–13.4 µstrain/°C.



Table 2 – Results generated from the study in Minitab [14]

Results for Comparator
Source            StdDev (SD)   Study Var (6*SD)   %Study Var (%SV)
Total Gage R&R    0.0019027     0.011416             9.93
Repeatability     0.0016931     0.010159             8.83
Reproducibility   0.0008682     0.005209             4.53
  Operator        0.0000000     0.000000             0.00
  Operator*Part   0.0008682     0.005209             4.53
Part-To-Part      0.0190694     0.114416            99.51
Total Variation   0.0191641     0.114985           100.00
Number of Distinct Categories = 14

Results for Manual Micrometer
Source            StdDev (SD)   Study Var (6*SD)   %Study Var (%SV)
Total Gage R&R    0.0050404     0.030242            26.75
Repeatability     0.0047539     0.028524            25.23
Reproducibility   0.0016750     0.010050             8.89
  Operator        0.0000000     0.000000             0.00
  Operator*Part   0.0016750     0.010050             8.89
Part-To-Part      0.0181594     0.108957            96.36
Total Variation   0.0188460     0.113076           100.00
Number of Distinct Categories = 5
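The derived columns of Table 2 follow directly from the standard-deviation components. The sketch below reproduces the Study Var, %Study Var and number-of-distinct-categories figures for the comparator from its StdDev column, using the standard Gauge R&R formulas (only the comparator values from Table 2 are used; the variable names are ours, not Minitab's).

```python
# Reproducing the derived columns of Table 2 (comparator results) from the
# standard-deviation components, using the standard Gauge R&R formulas:
#   Study Var = 6 * SD, %SV = SD / total SD, NDC = floor(sqrt(2) * part SD / gauge SD).
import math

sd = {
    "Total Gage R&R":  0.0019027,
    "Repeatability":   0.0016931,
    "Reproducibility": 0.0008682,
    "Part-To-Part":    0.0190694,
    "Total Variation": 0.0191641,
}

for source, s in sd.items():
    study_var = 6 * s                           # study variation spans +/- 3 sigma
    pct_sv = 100 * s / sd["Total Variation"]    # contribution to total study variation
    print(f"{source:16s}  {study_var:.6f}  {pct_sv:6.2f}%")

# Number of distinct part categories the measurement system can resolve:
ndc = int(math.sqrt(2) * sd["Part-To-Part"] / sd["Total Gage R&R"])
print("Number of Distinct Categories =", ndc)  # 14, as reported in Table 2
```

Running the same calculation on the manual micrometer column gives its 26.75% Total Gage R&R and NDC of 5, which is why the micrometer fails for the 15 µm tolerance.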

This means that Inconel 718 expands on average by about 13.1 µm per metre per °C. Applying this factor to the 65 mm diameter of the part, 13.1 µm × 65 mm / 1000 mm = 0.8515 µm, i.e. approximately 0.9 µm of expansion across 65 mm per degree, which must be taken into account during measurement.
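The allowance above can be checked with the standard linear expansion formula ΔL = α·L·ΔT. The sketch below uses the mid-range coefficient quoted in the text; the helper function name is ours.

```python
# Check of the thermal-expansion allowance: delta_L = alpha * L * delta_T,
# using the mid-range expansion coefficient for Inconel 718 from the text.
ALPHA = 13.1e-6     # thermal expansion coefficient, 1/degC (mid-range value)
DIAMETER_MM = 65.0  # nominal shaft diameter

def expansion_um(diameter_mm, delta_t_degc, alpha=ALPHA):
    """Diametral expansion in micrometres for a temperature change delta_t_degc."""
    return alpha * diameter_mm * 1000.0 * delta_t_degc  # mm -> um

print(f"{expansion_um(DIAMETER_MM, 1.0):.4f} um per degC")  # 0.8515 um, as in the text
# Against a 15 um tolerance band, a few degrees of difference between shop
# floor and inspection room is already a noticeable fraction of the tolerance:
print(f"{expansion_um(DIAMETER_MM, 5.0):.2f} um for a 5 degC offset")
```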

Material Variations

Nickel-based superalloys are widely used in the oil and gas industry. Due to their properties they are considered difficult-to-cut alloys, as their machining consumes large amounts of energy, tooling and time. The use of high-speed machining is advised for such materials. The main benefits of high-speed machining are lower cutting forces, reduced lead time, removal of heat with the chips (ensuring less distortion and increasing part precision [10]) and faster material removal rates, all resulting in lower energy consumption. The outcome of these benefits is higher productivity and throughput [11].

Research on machining superalloys has covered machining techniques [10], different cutting speeds and angles, various cutting tools (carbide and ceramic, both coated and uncoated) and coolant media (air and liquid). Needless to say, each method has its pros and cons, and there is no fully effective approach available. Moreover, there is little knowledge of these methods available in industry. Most manufacturing companies are still working with outdated methods developed internally by their engineers and/or, at best, via consultation with their tool suppliers.

The batch-to-batch variation in materials (Inconel 718 alloy) is evident from machining operations. Machine operators report that some batches with almost the same nominal characteristics are softer and easier to machine than others, resulting in less tool wear; this is attributed to variations in the composition of the material, heat treatment and grain size/structure.

As part of this study a sample of Inconel metal was taken to a materials preparation lab for microstructure analysis. The process was to cut specimens at different orientations from wrought Inconel 718, mount them in resin and polish them for microscopy analysis. The

13 If the Total Gage R&R contribution in the %Study Var column (%Tolerance, %Process) is:
• less than 10%, the measurement system is acceptable;
• between 10% and 30%, the measurement system is acceptable depending on the application, the cost of the measuring device, the cost of repair, or other factors;
• greater than 30%, the measurement system is unacceptable and should be improved.
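The acceptance rule in this footnote can be expressed as a small helper; the thresholds are taken from the footnote, while the function name and category labels are ours.

```python
# Acceptance rule for a measurement system based on its Total Gage R&R
# %Study Var contribution, per the thresholds in the footnote above.
def assess_measurement_system(total_grr_pct):
    """Classify a measurement system from its Total Gage R&R %Study Var."""
    if total_grr_pct < 10:
        return "acceptable"
    if total_grr_pct <= 30:
        return "conditionally acceptable"  # depends on application and cost
    return "unacceptable"

print(assess_measurement_system(9.93))   # comparator in Table 2
print(assess_measurement_system(26.75))  # manual micrometer in Table 2
```

Applied to Table 2, the comparator (9.93%) is acceptable while the manual micrometer (26.75%) is only conditionally acceptable, consistent with the conclusion that it is unsuitable for a 15 µm tolerance.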

KTP Associates Conference 2014



next stage was to etch the specimens in order to reveal the grain structure. The aim of the work is to compare batch-to-batch materials from suppliers and discover the effects of material variation on machining. The grinding and polishing method used for Inconel 718 is shown in Table 3.

Table 3 - Grinding and polishing method used for Inconel 718

| Stage        | Surface  | Lubricant | Particle size | Head rotation | Speed (rpm) | Force (lbs/sample) | Time (mins) |
|--------------|----------|-----------|---------------|---------------|-------------|--------------------|-------------|
| Planar grind | SiC      | Water     | P180          | Comp          | 150         | 5                  | 1; repeat until plane |
| Planar grind | SiC      | Water     | P500          | Comp          | 150         | 5                  | 1; repeat if necessary |
| Planar grind | SiC      | Water     | P1200         | Comp          | 150         | 5                  | 1 |
| Final polish | ChemoMet | None, but wet with water, then remove water before use | MasterMet 0.05 µm silica | Contra | 120 | 6 | 4; repeat if necessary |


For etching, Kalling's No. 2 etchant was used; the results, obtained by optical microscopy, are shown in Figure 18. The left image shows an area close to the edge of the Inconel 718 bar and the right image shows an area close to the core section of the bar. The left image reveals fewer small grains, due to the heat generated during cutting/machining. This microstructural change introduced by the machining process causes dimensional inaccuracies as a function of time [8], which helps explain why the customer returned parts. Large grain sizes help to prevent creep, while smaller grain sizes enhance strength and fatigue resistance [12].

Figure 18 - Images from inverted reflected light microscope - grain size ASTM 5 (50x)

Material Specifications Several standards for material specification are used commercially. One of the major ones is the Aerospace Material Specifications (AMS) series, which specifies both the engineering materials and the fabrication process. The choice of standard depends on the product specifications determined by the application. AMS 5662 and AMS 5663 are the most common standards and are also used for this case study project. AMS 5663 focuses on hardness and tensile properties at room temperature. The only difference between the two standards is that AMS 5662 requires solution heat treatment (hardness about 20-25 HRC), whereas AMS 5663 requires heat treatment plus precipitation hardening (hardness about 36-44 HRC) [13]. The study of the different standards suggests that Inconel 718 can be heat treated to a lower hardness prior to machining. It is therefore wiser to machine to AMS 5662 at the lower hardness, which reduces the machining forces and increases tool life, and then heat treat the components to the AMS 5663 level.


Footnotes:
14. Kalling's No. 2: 2.5 g CuCl2, 50 ml concentrated HCl and 50 ml ethanol.
15. AMS 5663 specification: solution heat treatment for 1 hour in Ar at 950°C then air cooled, followed by precipitation heat treatment at 718°C for 8 hours, furnace cooled at 38°C/min to 620°C, held for 8 hours and finally air cooled.




Statistical Process Control (SPC) Value stream mapping (VSM) is used as a tool to project the overall image of the product manufacturing route and to reassess best practice. A quick examination of part routes shows that parts go to the inspection room several times during manufacture. This approach increases lead time and limits the company's capacity; furthermore, verifying parts in this way increases uncertainty in the measured dimensions. On-machine part verification using measurement probes is under review. Statistical process control (SPC) is used for validating the process in mass production. It can potentially help to decrease lead time and reduce the scrap rate, and it gives the quality department better control of on-machine verification without human involvement. Exporting data to SPC software also allows one-to-one part verification reporting that can be used to inform the customer if needed. This enhances confidence in the production processes, creating more opportunity for process improvement in the future.
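As a concrete illustration of the kind of calculation SPC software performs, the sketch below computes X-bar and R control-chart limits from subgroups of five measurements. The data and function name are illustrative, not from the case study company; the constants A2, D3 and D4 are the standard control-chart factors for subgroup size n = 5.

```python
import statistics

# Standard control-chart factors for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Return (LCL, centre, UCL) for the X-bar chart and for the R chart."""
    xbars = [statistics.mean(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbarbar = statistics.mean(xbars)   # grand mean (X-bar chart centre line)
    rbar = statistics.mean(ranges)     # mean range (R chart centre line)
    xbar_limits = (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar)
    r_limits = (D3 * rbar, rbar, D4 * rbar)
    return xbar_limits, r_limits

# Illustrative data: four subgroups of five diameter measurements (mm)
data = [
    [25.01, 25.03, 24.99, 25.02, 25.00],
    [25.02, 25.00, 25.01, 24.98, 25.03],
    [24.99, 25.01, 25.02, 25.00, 25.01],
    [25.00, 25.02, 24.97, 25.01, 25.02],
]
(xlcl, xc, xucl), (rlcl, rc, rucl) = xbar_r_limits(data)
print(f"X-bar chart: LCL={xlcl:.4f}, centre={xc:.4f}, UCL={xucl:.4f}")
print(f"R chart:     LCL={rlcl:.4f}, centre={rc:.4f}, UCL={rucl:.4f}")
```

Points falling outside these limits would flag the process (or the measurement) as out of statistical control and worth investigating before parts reach the inspection room.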

Conclusions The paper has presented and discussed the operational use of Gauge R&R and its benefits for commercial products. The authors have attempted to close the gap between academia and industry by applying knowledge-base techniques and providing feedback to engineers at the case study company. Examination of the grain structure with optical microscopy after machining reveals microstructural changes. Further research on the influence of machining on measurement inaccuracies is planned as future work.

Acknowledgements This work was supported by the Technology Strategy Board, Knowledge Transfer Partnership (Grant 9253). The authors wish to thank Mark Chappell from LIMA-BTC, Clare Ball in the Department of Mechanical Engineering, University of Bath, for technical support, and Martin Speight from Helander Precision Engineering for his support.



References
[1] Murphy S.A., Moeller S.E., Page J.R., Cerqua J. and Boarman M. (2009) "Leveraging Measurement System Analysis (MSA) to Improve Library Assessment: The Attribute Gage R&R", College & Research Libraries, Vol. 70, No. 6, pp. 568-577
[2] CES EduPack 2013, Version 12.2.13
[3] Donachie M.J. and Donachie S.J. (2002) "Superalloys: A Technical Guide", Second Edition, ASM International
[4] Liao Y.S., Lin H.M. and Wang J.H. (2008) "Behaviours of End Milling Inconel 718 Superalloys by Cemented Carbide Tools", Journal of Materials Processing Technology, Vol. 201, pp. 460-465
[5] Pawade R.S., Joshi S.S. and Brahmankar P.K. (2008) "Effect of Machining Parameters and Cutting Edge Geometry on Surface Integrity of High Speed Turned Inconel 718", International Journal of Machine Tools and Manufacture, Vol. 48, pp. 15-28
[6] Fang N. and Wu Q. (2009) "A Comparative Study of the Cutting Forces in High Speed Machining of Ti-6Al-4V and Inconel 718 with a Round Cutting Edge Tool", Journal of Materials Processing Technology, Vol. 209, pp. 4385-4398
[7] Wang Z.Y., Rajurkar K.P., Fan J., Lei S., Shin Y.C. and Petrescu G. (2003) "Hybrid Machining of Inconel 718", International Journal of Machine Tools and Manufacture, Vol. 43, pp. 1391-1396
[8] Dudzinski D., Devillez A., Moufki A., Larrouquère D., Zerrouki V. and Vigneau J. (2004) "A Review of Developments Towards Dry and High Speed Machining of Inconel 718 Alloy", International Journal of Machine Tools and Manufacture, Vol. 44, pp. 439-456
[9] Ezugwu E.O. (2005) "Key Improvements in the Machining of Difficult-to-Cut Aerospace Superalloys", International Journal of Machine Tools and Manufacture, Vol. 45, pp. 1353-1367
[10] Thakur D.G., Ramamoorthy B. and Vijayaraghavan L. (2008) "Study on the Machinability Characteristics of Superalloy Inconel 718 during High Speed Turning", Materials and Design, Vol. 30, pp. 1718-1725
[11] Schulz H. and Moriwaki T. (1992) "High Speed Machining", CIRP Annals - Manufacturing Technology, Vol. 41, Iss. 2, pp. 637-643
[12] Campbell F.C. (2006) "Manufacturing Technology for Aerospace Structural Materials", Elsevier Ltd
[13] Wang W. (2011) "Reverse Engineering: Technology of Reinvention", Taylor & Francis Group
[14] Cheshire A. (2011) "How to Interpret Gage R&R Output - Part 2", The Minitab Blog (online resource), available from: http://blog.minitab.com/ [accessed 09-May-2014]




Poster Presentations



R&D of Leak Repair Additives for Automotive Cooling Systems

1. Project Aim
• Set up the Kalimex R&D laboratory.
• Design test rigs and test procedures that meet relevant ASTM (American Society for Testing and Materials) standards.
• Manufacture and commission the test rigs.
• Test K-Seal® and competitors' products using the test rigs, so that K-Seal®'s performance can be benchmarked.
• Use the test data to show K-Seal®'s adherence to ASTM standards and that it is fit for purpose.

2. What is K-Seal®?
Kalimex Ltd is a supplier of automotive repair products. Its main brands are K-Seal® (Figure 1) and K-Seal® HD (Figure 2), the automotive market's premium cooling system leak repair additives. K-Seal® can be added to the cooling system of all types of water-cooled engine to permanently seal leaks in engine blocks, cylinder heads, head gaskets, freeze plugs, radiators, heater cores (matrix) and water pump casings.

K-Seal® is compatible with all types of engine coolant concentrate (antifreeze), such as ethylene glycol and propylene glycol based antifreezes. K-Seal® can be added to any engine coolant (antifreeze and water solution) irrespective of the concentration of antifreeze, or to just water. There is therefore no need to drain or flush the cooling system before or after treatment.

Figure 1: The standard K-Seal® bottle (236 ml) treats cooling systems of up to 20 litres capacity (e.g. cars, vans and LCVs). Figure 2: The K-Seal® HD bottle (472 ml) treats cooling systems of up to 57 litres capacity (e.g. HGVs, buses and coaches).

3. Facts about K-Seal®
• Launched in 2003.
• Sold over 1.7 million units in the UK and over 2.5 million units worldwide since 2003.
• In use with the AA (Automobile Association) breakdown service since 2008.
• Recommended by 9 out of 10 stockists (independent research).
• A leak repair by K-Seal® is guaranteed for the lifetime of the engine.

4. Benefits of KTP
KTP brings together the three parties (Kalimex Ltd, the University of Brighton and the Associate), each benefiting the others with their unique skills, abilities and expertise, as shown in the diagram below.

Business Partner - Kalimex Ltd
• Enables Kalimex to undertake test work normally only available to its larger global competitors.
• Produced independent test data demonstrating K-Seal®'s adherence to the relevant ASTM standards, enabling Kalimex to reinforce and grow domestic market share and to develop new export sales opportunities.
• Improved technical support to customers, from the expertise and product data gained during the KTP project, which enhances Kalimex's reputation and translates into an increased level of repeat business.
• The embedding of technical understanding gained from testing and literature surveys supported the further development of the product and also enhanced Quality Assurance (QA) methods and testing.

Academic Institute (Knowledge Base Partner) - University of Brighton
• Enables the university to further develop its business-relevant teaching materials and strengthen its connections with industry.
• Provides a source of research projects for undergraduate (e.g. final year projects) and postgraduate students (e.g. MSc projects or PhD).
• Opportunity to publish research papers in scientific journals based on the research carried out during the KTP project.

Associate - Nick Applin
• Gain real-world research experience (including practical laboratory skills) in a commercial context.
• Further develop people and project management skills through ownership of the KTP project.
• Advance technical writing skills by producing several completely new technical reports for the Kalimex archive.
• Opportunity for further training and qualifications (e.g. MSc, MRes or PhD).
• Enhanced career prospects, due to the above points.

5. Testing Methods
ASTM D3147 Leak Test Rig (Figure 3): Standard test method for assessing the sealing capability of leak repair additives under the prescribed conditions of flow rate, temperature, pressure and time.

RLTM (Rapid Leak Test Method) Rig (Figure 4): The RLTM test rig was developed in-house to facilitate rapid batch testing of K-Seal®. This rig also assesses the sealing capability of leak repair additives, but with the advantages of lower capital and operating costs, a smaller footprint and a considerably shorter test cycle time than the ASTM D3147 test rig.

ASTM D1881 Antifreeze Foaming Glassware Test: Standard test method for evaluating the foaming tendency of engine coolants under laboratory-controlled conditions of aeration and temperature, using standard glassware.

Antifreeze Compatibility Glassware Test: Test method based on section 3.4 of Commercial Item Description (CID) A-A-52624A, a U.S. government test procedure for assessing the compatibility of engine coolants under prescribed conditions of time and temperature. The engine coolants under test are assessed by visual inspection.

Figure 3 (ASTM D3147 leak test rig): (A) Power supply and electrical control system | (B) Temperature controller, set to 88°C | (C) Compressed air pressure regulator, set to 103 kPag (15 psig) | (D) Circulation pump, set to deliver a flow rate of no less than 30 L/min | (E) 1.5 kW immersion heater | (F) Type K insulated thermocouple probe | (G) Test solution fill ball valve | (H) Pressure relief valve with reservoir pressure gauge | (I) Stainless steel test plate holder assembly | (J) Stainless steel test solution reservoir | (K) Stainless steel collection pan for collecting test solution lost from the test plate | (L) Test solution drain ball valve (not visible, behind (K)).

Figure 4 (RLTM test rig): (M) 6.4 kg steel weight, applied to the syringe plunger to maintain the test solution at a constant pressure of 103 kPag (15 psig) | (N) 60 ml catheter-tip syringe containing test solution | (O) Pressure transducer, with a range of 0 to 200 kPag (0 to 29 psig) (not visible, behind syringe (N)) | (P) Aluminium test plate holder assembly | (Q) USB Data Acquisition (DAQ) module connected to the pressure transducer (O) and a laptop, which carries out data logging so that a pressure profile of the experiment can be produced | (R) 100 ml measuring cylinder for collecting test solution lost from the test plate.

6. Test Results Summary
Graph 1: Leak test data from the ASTM rig for 0.025 in (0.635 mm) diameter hole and 0.010 in (0.254 mm) wide slot test plates, plotting average fluid loss of passed tests (mL) and percentage of tests passed (%) for K-Seal®, Competitor X and Competitor Y*. (*Published data from Competitor Y; no data available for percentage of tests passed.)

It can be seen from Graph 1 that K-Seal® performed the best, with the lowest average total fluid loss of 285 mL and a 100% sealing success rate for the 0.025 in (0.635 mm) diameter hole and the 0.010 in wide slot. In terms of fluid loss, K-Seal® performed better than Competitor X in sealing 0.010 in wide slots, and Competitor X performed better than K-Seal® in sealing 0.025 in (0.635 mm) diameter holes. Competitor X's sealing capability was not consistent, with a success rate of only 33.3%.

Graph 2: K-Seal® leak test data from the RLTM rig for the 0.025 in (0.635 mm) diameter hole test plate, plotting average fluid loss of passed tests (mL) and percentage of tests passed (%) for batches 001 to 004.

It can be seen from Graph 2 that K-Seal® was consistent at sealing 0.025 in (0.635 mm) diameter holes, with a 100% sealing success rate across batches. Note that a comparison cannot be made between the average fluid losses in Graphs 1 and 2, because each test method was conducted at a vastly different scale: in terms of total potential fluid loss, the ASTM test method is 100 times larger than the RLTM.

7. Conclusion
The data in Section 6 demonstrate that the RLTM and ASTM leak test methods yield the same maximum hole diameter that K-Seal® can seal satisfactorily: 0.025 in (0.635 mm). The RLTM is therefore a suitable alternative to the ASTM method for assessing the maximum hole sealing capability of K-Seal®. The Graph 1 data show that K-Seal® is capable of satisfactorily sealing 0.025 in (0.635 mm) diameter holes and 0.010 in (0.254 mm) wide by 0.5 in (12.7 mm) long slots, in accordance with the ASTM D3147 test method. K-Seal®'s performance was competitive against its main rivals, as its maximum hole and slot sealing capability was the same as the competitors'. Overall, K-Seal® is the best choice for customers wishing to repair leaks in their cooling system.

KTP Associate: Mr Nicholas Applin (Research Engineer) | Academic Supervisor: Dr Nicolas Miché (Lecturer) | Company Supervisor: Mr Mike Schlup (Managing Director)




The jewellery customer journey and retail experience are the focus of the proposed project, together with the need to protect the Company's routes to market (e.g. the jeweller's shop) against the trend of high street closures. The KTP will integrate new technologies within the design of a new digital communication and brand strategy, consisting of the exploration and development of Technology Enhanced Applications (TEA).

DIGITAL BRAND DESIGN

Hockley Mint: KTP Full Proposal - Project Structure Brand Design TEA

- Brand identity of 3 companies
- Establish core brand values
- Develop brand promise and profile
- Heritage through typography
- Integrated advertising and events

Brand Design

Discovery

Introduction Experience Design (XD) User Experience (UX)

Communication Design Technology Enhanced Applications (TEA)

TEA

- Augmented Reality (AR) in retail design (store/web/mobile)
- Customer reviews & product endorsements
- Social media community
- Customer recommendation and Word of Mouth (WoM) marketing
- High-resolution photography
- In-app data capture, analysis and statistical information
- 'Memory books' concept, connecting renewal of vows or marriage to bespoke publishing and social networks
- Development of motion graphics and information design

Develop new business streams through integrated advertising.

1. Situation Analysis
Objective: Review the company's market through a marketing report evaluating opportunities.
Output 1 (Task 1.6): Short marketing report and management presentation.

2. Brand Strategy
Objective: Develop an initial strategy addressing inconsistency of brands.

Review

Experience Design (XD) User Experience (UX)

PROJECT AIMS

Communication Design: Integrated Print & Digital Strategy
- Data collection
- Print on demand services
- Bespoke print & PDF catalogues
- Print-to-web connectivity
- Tablet app development

The recent rapid adoption of very high resolution (retina) screen technology in mobile devices presents new visual opportunities for product display and interaction, such as Augmented Reality (AR). A high-quality, value-added customer journey is now possible in any location, whether on the high street or through a website, at a resolution many times better than printed media.

3. Market Research
Objective: Test initial brand explorations on customers.
Output 2 (Task 3.4): A cost-benefit analysis of the technical provision with recommendations. Resource management efficiencies derived from analysis. Staff trained.

4. Brand Design
Objective: Implement the new brand design strategy.
Output 3 (Task 4.5): Production and deployment of new branded product catalogues, exploring brand messaging to retailers.

FAMILY TREE

5. Technology Enhanced Applications (TEA) Phase 1 - High-Resolution Imagery
Objective: Investigate high-resolution applications.
Output 4 (Task 5.2): Early opportunities for use of high-resolution photography through responsive web solutions.

6. Design Phase 4 - Digital Advertising Campaign
Objective: Advertise new brands to extend the market.
Output 5 (Task 6.4): Deploy viral advertising campaign and Social Media strategy; increase in new inquiries and sales.

7. TEA Phase 2
Objective: Design an Augmented Reality Retail App.
Output 6 (Task 7.6): Novel retail experience opportunities for customers. New customer product information for the design team.

8. TEA Phase 3
Objective: Develop an Online Planning Tool.
Output 7 (Task 8.4): A unique personal service within the jewellery sector, creating "stickiness" to the company's retail offer.

9. TEA Phase 4
Objective: Launch Customer Print on Demand (PoD) Services.
Output 8 (Task 9.4): Capture unique data informing new product development. Cost saving of £14k.

10. Review the Retail Experience, Project Evaluation and Documentation

BRAND VALUE BOARD GAME

Consider what works effectively and ineffectively to reposition the brands.

Aims: To collectively decide the four sub values within the four 'core' brand values - Trust, Intelligent Solutions, Heritage and Ethics.

Plan how the brands could work effectively in print, web, mobile phones and tablets. Think about the tone of voice and how the brands want to communicate their values.

1. Group into small teams of 2 - 3.

Consider the future changes at Hockley Mint - two companies merging.

2. Cross out irrelevant sub brand values shown on the chart and add new sub brand values.

What customers are each brand communicating to - retailer, jeweller or public?

3. Move sub brand values underneath the relevant core brand value headings. 4. Number sub brand values 1 to 4; 1 being the most important brand value. 5. Discuss brand values and number 1 to 4; collate results and share thoughts.

MERRELL CASTING

6. The Board Game begins. Place the counters onto the board and decide who is drawing and who is guessing. 7. Pass the cards containing 'sub' values around to the team members who have chosen to draw. The person drawing selects a card and passes it to all those drawing. 8. Those drawing attempt to depict the value visually; paraphrasing can be used. 9. The first team to shout out the correct brand value moves around the board. The more relevant the brand value, the higher the score. 10. The first team to the centre of the board wins.

COMPETITOR RESEARCH

FAIR TRADE

CUSTOMER RESEARCH

‘The traditional structure of the supply chain means that gold from several sources may be refined in the same batch, and it has been historically impossible for consumers to know where the gold in an item of Jewellery was mined’.

Questions Asked:

COMPETITOR WEBSITES | SPRING FAIR | COMPETITOR CATALOGUES | ADDITIONAL SERVICES | ASKING CUSTOMERS WHAT COMPETITORS DO EFFECTIVELY AND INEFFECTIVELY

‘An NGO ‘Fair Trade Gold’ movement has surfaced, its crystallization fuelled by a burgeoning body of evidence that points to impoverished artisanal miners in developing countries receiving low payments for their gold, as well as working in hazardous and unsanitary conditions […] Artisanal gold mining activities are far more widespread and illegal. Any attempt to liaise with such miners, therefore, is bound to encounter resistance from host governments’.

How can we help you to sell more rings? What changes would you like to see to the current Hockley Mint brands? What digital/visual selling devices would you like to see Hockley Mint use? How do you interact with your customers?

Jewellers are less interested in the Hockley Mint brands and more interested in products that are easy to search, and in direct communication with Hockley Mint to access prices, images, orders and discounts.

REFERENCE: Fair trade gold: Antecedents, prospects and challenges.

Retailers want an interactive customer journey; high-quality images are essential. They are interested in the development of the Wedfit logos and the bespoke design service. Both customer groups are looking to become more digitally advanced. Example research: in reference to the bespoke design service, one customer would like to see CAD drawings and renders to show to his customers. We discussed the idea of a login system where images can be downloaded to show to customers. Quote: 'Marketing tools are the images'. I asked about mobile jewellery apps, and this was not of interest to him, as he has a mobile-compatible website. We discussed the idea of a 'memory book' print on demand service, which he was very interested in. He would like to be able to type in a ring product and receive an image of the ring to show to the customer.

DESIGN IDEAS

BRAND DESIGN

WEB DESIGN

Brand names and writing brand copy.

Wireframe structure

‘Jewellers’ notebook imagery.

Animation and film for web

The Fair Trade concertina booklet - ideas of hands and illustrations over the top

Brand designs for web

Book: coloured sections marked by graphics; images of the making process throughout; photographs (portraits) of the people making rings.
Website: a section for findings; all of the parts can be accessed by typing in codes or names.
Exploration of composition and design principles, e.g. visual hierarchy and grid. Tone of voice through visual language to represent the personality of the brands. Use of semiotics through development of word brands, logotypes, graphic identities and symbols to signify the new brand values. Brand messaging using customer semantics and development of strap/tag lines, leading to a copywriting approach which extends the brands (particularly in advertising contexts). Application of colour theory, pattern and textures to convey brand personality on an emotive level. Typographic applications through the selection of corporate typefaces and typestyles, used to maintain brand parity across print and screen media.

Graphics for Web Responsive design



Knowledge Transfer Partnership: Construction Youth Trust & London South Bank University

Introduction
Construction Youth Trust is a registered charity that helps young people access opportunities in the construction industry.

Theory of Change A theory of change describes the change you want to make and the steps involved in making them happen. It involves mapping how an organisation or project aims to deliver its intended outcomes and the assumptions that underpin this.

The Significance of the KTP

The Aim of the KTP
The aim of the KTP is to provide the Trust with a comprehensive Social Return on Investment (SROI) analysis of its unique work.

References: Gibbon, J. & Dey, C. (2011) "Developments in Social Impact Measurement in the Third Sector: Scaling Up or Dumbing Down?", Social and Environmental Accountability Journal, 31:1, 63-72, DOI: 10.1080/0969160X.2011.556399. Jardin, C. & Hodgson, L. (2010) "Stakeholder interests reflected in social impact performance measurement for strategic charity management", paper presented at NCVO/VSSN event, Leeds, July 2010.

The voluntary sector is under increasing pressure to deliver public services, coupled with the need to demonstrate that it is achieving value for money. The KTP is an exciting and interesting project, as social return on investment (SROI) analysis is currently a complex, time-consuming and expensive process. The project will contribute to increasing the evidence base on SROI, which is necessary to simplify the process.

New Philanthropy Capital (2010) Social Return on Investment Position Paper, April 2010 New Economics Foundation (NEF). (no date). Our Work: Social Return on Investment. Available: http://www.neweconomics.org/ issues/entry/social-return-on-investment. Last accessed 28th March 2014

The KTP project team:
• Jemma Bridgeman, KTP Associate, jemma.bridgeman@cytrust.org.uk
• Alex Murdock, Knowledge Supervisor, alex.murdock@lsbu.ac.uk
• Helen Devitt, Company Supervisor, helen.devitt@cytrust.org.uk

Social Return on Investment (SROI) SROI is an “analytic tool for measuring and accounting for a much broader concept of value, taking into account social, economic and environmental factors” (NEF).
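In its simplest form, an SROI ratio compares the discounted value of projected social outcomes with the investment made. The sketch below is a simplified illustration of that arithmetic, not the Trust's actual model; all figures and the 3.5% discount rate (the rate commonly used in UK government appraisal) are assumptions for the example.

```python
def sroi_ratio(annual_social_value, investment, discount_rate=0.035):
    """Illustrative SROI calculation: present value of a stream of
    projected annual social-value figures, divided by the investment."""
    present_value = sum(
        v / (1 + discount_rate) ** (t + 1)
        for t, v in enumerate(annual_social_value)
    )
    return present_value / investment

# Hypothetical example: £50k invested, £20k of social value per year for 3 years
ratio = sroi_ratio([20_000, 20_000, 20_000], 50_000)
print(f"SROI ratio: {ratio:.2f} : 1")
```

A ratio above 1 indicates that the discounted social value created exceeds the investment; real SROI analyses add further adjustments (deadweight, attribution, drop-off) before reporting a ratio.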

Research Methods
The project will use mixed methods to ensure the reliability and validity of findings, consisting of:
• Long-term monitoring of beneficiaries
• Focus groups
• Internet questionnaires
• Case studies of Construction Youth Trust projects
• Observation
• Desktop research

Conclusions SROI is a methodology still under development. Construction Youth Trust have demonstrated that they are a forward thinking and innovative organisation by increasing the evidence base on SROI.




Developing a comprehensive assessment, monitoring and intervention package for people with Alcohol-Related Brain Damage (ARBD) living in a residential rehabilitation facility in Glasgow
KTP Associate: Lindsay Horton BA BSc MRes, e-mail: lindsay.horton@uws.ac.uk
Knowledge Base Supervisor: Dr Tim Duffy (University of the West of Scotland)
Lead Academic: Professor Colin Martin (Buckinghamshire New University)
Company Supervisors: Glenn Harrold (Loretto Care) & Marshall McDowall (Loretto Care)

Background, Rationale and Aims

Methods and Initial Results

ARBD is caused by chronic alcohol misuse and thiamine (Vitamin B1) deficiency and results in memory impairment and problems with everyday functioning, as well as reduced quality of life, depression and anxiety (Thomson et al, 2012). People with ARBD have the potential for full or partial recovery, providing they maintain abstinence from alcohol and receive appropriate support from service providers (MacRae & Cox, 2003).

The KTP project involves conducting a range of neurocognitive, psychosocial and functional occupational therapy assessments within the first 3 weeks of individuals moving into the ARBD facility and every 3 months thereafter for a period of 12 months.

The KTP project will provide Loretto Care with an evidence-based way of assessing and monitoring residents living in a purpose built ARBD rehabilitation facility, which opened recently in Glasgow’s East End.

Benefits of KTP Project The KTP project will contribute to the provision of high quality, person-centred care that promotes independence and is conducive to improved health and quality of life, through the implementation of evidence-based intervention strategies. The prevalence of ARBD is increasing in the UK and Loretto Care aims to become a leading provider of residential care for ARBD. The KTP project will facilitate staff training to improve knowledge and skills within the organisation, as well as providing evidence to support future funding applications for an additional 3 Loretto Care facilities in the next 10 years.

The KTP assessments will inform individualised care plans, as they will provide feedback about each individual’s strengths, limitations and support needs. The initial findings demonstrate that residents’ mental and physical health status is varied, whilst impairments in memory and executive functioning and mild problems with activities of daily living are widespread.

Next Steps Following data collection, statistical analyses will be conducted to identify trends and patterns within the data, such as improvements in functioning over time. In collaboration with the Academic Supervisors, the KTP Associate will be responsible for preparing research articles for submission to academic journals. To date, three systematic literature reviews have been completed, two of which have recently been accepted for publication. Staff training needs will be identified through staff development workshops, and training sessions will be provided to Loretto Care staff via the University of the West of Scotland.

References: MacRae, S. & Cox, S. (2003) Meeting the Needs of People with Alcohol-Related Brain Damage: A Literature Review on Existing and Recommended Service Provision and Models of Care. University of Stirling. Thomson, A.D., Guerrini, I. & Marshall, E.J. (2012) The evolution and treatment of Korsakoff’s syndrome: out of sight, out of mind? Neuropsychology Review 22, 81-92.





Designapro - Personalize your product

Bora Shkurti

Objective

Designapro V.1

(www.dartscorner.co.uk)

The goal was to implement new technology which would help differentiate the business from others in the same market and bring significant results in sales and productivity for the product customization channel. The proposed solutions led to Designapro, a web service which completely changes the front and back end of the current system, creating a highly advanced user-system interaction that allows many products to be personalized simultaneously, which is particularly useful for team orders. Research shows that this is the first software on the market to incorporate advanced features such as team orders and multiple discount structures. The software is implemented via a secure REST API that allows the back-end processing of personalized orders to run in parallel, which shortens the time taken to complete and dispatch the full order.
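The parallel back-end idea can be sketched as follows. The worker function, payload fields and order data below are illustrative assumptions, not the actual Designapro API; in the real system each job would be dispatched via the REST API rather than a local function.

```python
from concurrent.futures import ThreadPoolExecutor

def personalise(item):
    """Stand-in for dispatching one personalised item to a manufacturing
    back end (in the real system this would be a REST API call)."""
    return f"{item['product']} engraved with '{item['text']}'"

# A team order: each member's item can be personalised independently,
# so the jobs can run in parallel, shortening total dispatch time.
team_order = [
    {"product": "Dart flight set", "text": "Player 1"},
    {"product": "Dart flight set", "text": "Player 2"},
    {"product": "Dart case", "text": "Club captain"},
]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(personalise, team_order))

for line in results:
    print(line)
```

Because the items share no state, the same pattern scales to however many personalisation jobs a team order contains.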

The Partnership This project is a collaboration between the University of Bolton and Darts Corner Ltd. • Darts Corner Ltd is an e-commerce company offering over 8,000 darts products. Established in 2004, it is now the largest online retailer of darts-related products in the UK. • The University of Bolton has previously been involved in successful KTP collaborations with several companies in the North West which have led to significant benefits for both parties. The University has expertise in many areas relevant to the project, such as e-business, open source technology, enterprise systems, CRM technology and supply management.

Company benefits and outcomes • Increased sales: the renewed customization front end enhances the customer experience, making it easier to input orders correctly, and the multiple discount structure will encourage larger orders. Taken together, the increase in sales is expected to be significant. A pilot project with limited functionality, successfully implemented in Autumn 2013, showed a 65% increase in purchases of personalised items. • Improved productivity: the seamless integration between the order processing system and the various manufacturing technologies required to personalise different products is expected to result in cost savings from fewer wasted products due to transcription errors, better allocation of staff to bottlenecks in particular personalisation technologies, fewer customer telephone queries regarding personalisation orders, and an overall increase in the number of personalisation orders that existing staff can process daily. • Improved user experience: an overall more user-friendly shopping and browsing experience. The system allows the user to individualize each item while personalizing it, making it easier to track individual items within team orders. • New technology implementation: the system uses current technologies such as a REST API architecture, HTML5, CSS3, JavaScript and PDO.


Improving the Accuracy of Point of Care HbA1c Monitoring Devices for Diabetes Dr. Jennifer Peed (KTP Associate) Contact: Jennifer.Peed@quotientdiagnostics.co.uk Prof. Tony James (Lead Academic Supervisor), Dr. Steven Bull (Academic Supervisor), Dr. Phillip Vessey (Industrial Supervisor).

Introduction

Diabetes is estimated to affect 347 million people worldwide, including 3 million people in the UK. This figure is expected to increase sharply in the future.1

Glycated haemoglobin (HbA1c) is produced by a reaction in the blood between haemoglobin and glucose (Figure 1). HbA1c measurements provide an average of blood sugar levels over a period of weeks to months. The main advantage to the patient is that fasting is not necessary before testing.

Figure 1 Glycation of haemoglobin in the blood

Quotient Diagnostics Ltd designs and manufactures point-of-care analysers to measure %HbA1c in diabetic patients from a finger-prick blood sample.

How it Works

The Quotient assay is based on a fluorescent boronate affinity sensor. This binds to the HbA1c from a patient blood sample, causing quenching of the fluorescent signal (Figure 3). This signal is recorded by the Quo-Test analyser to measure the %HbA1c. Patient results can range from healthy (4-6%) to diabetic (6.5-11%).

Figure 3 Reaction of fluorescent boronate affinity sensor with HbA1c2
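The reference ranges quoted above can be sketched as a simple bucketing function. This is illustrative only: the actual Quo-Test calibration from fluorescence signal to %HbA1c is proprietary, and the thresholds below are just the ranges stated in the text.

```python
# Illustrative sketch of the quoted reference ranges; the real
# signal-to-%HbA1c calibration in the Quo-Test analyser is proprietary.
def classify_hba1c(percent_hba1c: float) -> str:
    """Bucket a %HbA1c reading using the ranges quoted in the text."""
    if 4.0 <= percent_hba1c <= 6.0:
        return "healthy"
    if 6.5 <= percent_hba1c <= 11.0:
        return "diabetic"
    return "indeterminate / out of range"

print(classify_hba1c(5.2))  # healthy
print(classify_hba1c(7.8))  # diabetic
```

Readings between the two bands (6.0-6.5%) fall outside the quoted categories, so the sketch reports them as indeterminate rather than guessing.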

Project Objectives The Knowledge Transfer Partnership between the University of Bath and Quotient Diagnostics Ltd has the following aims: 1. Increase the sensitivity of the HbA1c assay by redesigning the fluorescent boronate affinity sensor. 2. Improve instrument-to-instrument precision by designing a new calibration procedure. This will ensure that Quotient Diagnostics Ltd continues to meet the ever-tightening acceptance criteria set by the NGSP (National Glycohemoglobin Standardization Program), which ensures that all HbA1c tests perform to an acceptable standard. Certification is an essential requirement to sell the instrument.

Figure 2 Quo-Test HbA1c Analyser

Achievements of the KTP Project to date

The KTP with the University of Bath has improved the technical knowledge of the HbA1c test at Quotient Diagnostics Ltd by providing a greater understanding of the chemistry underpinning the assay. The main achievements of the project to date are:

1. Solution to Invalid Assay Results: A small percentage of tests gave an invalid, elevated result for patient %HbA1c. In the worst case, such a result could lead to incorrect patient care in a clinical setting; it also threatens NGSP certification. • The cause of the erroneous assay results was investigated and traced to contamination during cartridge manufacture. • A simple chemical solution was implemented to eliminate the effect of this contamination. 2. Boronic Acid Reagent Analogues: The partnership has provided access to synthetic organic chemistry facilities at the University of Bath. Analogues of the fluorescent boronate affinity sensor (Figure 4) have been synthesised and are currently under evaluation to improve the accuracy of the HbA1c test.

Figure 4 Fluorescent Boronate Affinity Sensor

The Future…

Short Term: It is hoped that the work towards redesigning the fluorescent boronate affinity sensor will contribute to a better, more accurate HbA1c test. This will ensure that Quotient Diagnostics Ltd continues to meet the NGSP acceptance criteria.

Long Term: Further collaborations with the University of Bath are in the pipeline. The main focus will be to design assays for other blood analytes. This will add to the product portfolio of Quotient Diagnostics Ltd and also contribute to the academic understanding of sensing for glycated proteins.

References
1. www.diabetes.co.uk
2. Edwards, R.; Blincko, S.J.F.E. Glycated Proteins Assay. US Patent 5,877,025, March 2, 1999.
3. Quotient Diagnostics Ltd, Russell House, Molesey Road, Walton on Thames, KT12 3PE http://www.ekfdiagnostics.com/Quotient_Diagnostics_1260.aspx


Tackling the Role of Pigmentation in Ageing Skin – Developing a Novel Methodology Dr Anne Oyewole | KTP Associate | anne.oyewole@ncl.ac.uk Supervisors: Julie McManus & Dorothée Saint-Hilaire (L'Oréal), Prof Mark Birch-Machin (Newcastle)

KTP project - This 2-year project is a partnership between L'Oréal UK and Newcastle University (Dermatology Department) which aims to improve and develop L'Oréal's skincare and sunscreen range, supporting the efforts of government-led public health campaigns to reduce skin-related health issues.

Rationale

Ultraviolet radiation (UVR) from sunlight can cause visible damage to our skin (sunburn and wrinkles) and is the number one culprit for premature skin ageing. The pigment melanin, which gives skin its colour, is produced within the skin in cells called melanocytes, and these cells are vulnerable to UVR damage. A key challenge in skin biology is to determine the role of melanin in sun-exposed skin. Although there is a large body of evidence supporting the protective role of melanin in skin, more recent evidence suggests that melanin may, upon UVR exposure, elevate the level of harmful molecules called reactive oxygen species, which can cause invisible damage within our skin and speed up the ageing process.

“Our aim is to determine the light scattering profile of non, weakly and darkly pigmented cells using flow cytometry in order to determine melanin’s role in skin”

1) Non-, weakly and darkly pigmented cells are run through a flow cytometer 2) The cytometer measures the cells... 3) ...and outputs cell data related to their size, granularity and other markers

Results

 A novel method was successfully developed to measure differences between pigmented skin cells of varying melanin content using flow cytometry
 Further optimisation of this protocol will allow us to measure the stress levels of pigmented cells from sun-exposed and non-exposed skin sites to dissect the role of melanin in skin ageing
 This KTP project highlights the importance of understanding the differences between skin types as well as the biological mechanisms underpinning sun-induced skin ageing and long-term damage

Commercial Benefit

 Establishment of a methodology for the assessment of pigmentation status in human skin
 Training delivered to the company as part of embedding and transferring technological expertise, which will feed into product development and assist future research
 Potential to develop innovative products to meet commercial needs and enhance skin health protection benefits
 Increased sales revenue, particularly in the area of uneven skin tone (hyperpigmentation)
 Significant long-term cost efficiencies for the company, along with the potential to develop key business partnerships with UK SMEs, facilitating the transfer of knowledge and the opportunity to tackle a major societal challenge through collaborative working

Innovative Development & Outcomes

 The advancement of scientific understanding and capabilities in this project may help develop new innovative products that reduce or prevent signs of ageing and sun-induced damage
 As a result of increased scientific knowledge, it is hoped that this technology and method will help identify and develop new actives in the area of skin ageing and uneven skin tone that support youthful skin

The development and establishment of a protocol for the evaluation of pigmentation status in human skin will enable us to develop new innovative products conferring skin health protection benefits
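The flow cytometry workflow described above (cells run through the cytometer, which outputs per-cell size and granularity data) can be sketched as a simple gating step. This is a hypothetical illustration: the scatter threshold and event values below are invented, not measurements from the study.

```python
# Hypothetical sketch: a flow cytometer outputs per-cell measurements such
# as forward scatter ("fsc", size) and side scatter ("ssc", granularity).
# The threshold below is invented for illustration, not from the study.
def gate_cells(events, side_scatter_threshold=500.0):
    """Split events into low- and high-granularity populations."""
    low = [e for e in events if e["ssc"] < side_scatter_threshold]
    high = [e for e in events if e["ssc"] >= side_scatter_threshold]
    return low, high

events = [{"fsc": 900.0, "ssc": 300.0}, {"fsc": 950.0, "ssc": 700.0}]
low, high = gate_cells(events)
print(len(low), len(high))  # 1 1
```

Real analyses would gate on light-scattering profiles of non-, weakly and darkly pigmented cells, as the project quote describes, rather than a single fixed threshold.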


Intelligent Automated Test System Development
Neil Smiley
TDSi has successfully supplied systems to over 100,000 installations across the world, with several million card holders benefitting from TDSi access control systems every day.

Methodology

Project Challenges

Understand the practices and procedures currently adopted by TDSi. From this baseline, the developed system will gradually increase performance, scaling from testing a single I/O controller to 300 controllers or more.
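The scaling approach described above can be sketched as an automated harness that exercises a growing pool of simulated controllers. This is a minimal illustration under stated assumptions: `SimulatedController` and its `self_test` stub are invented for this sketch, not TDSi's actual test code.

```python
# Minimal sketch of scaling an automated test run from one simulated I/O
# controller up to hundreds; the controller stub is hypothetical.
from concurrent.futures import ThreadPoolExecutor

class SimulatedController:
    def __init__(self, controller_id: int):
        self.controller_id = controller_id

    def self_test(self) -> bool:
        # Stand-in for exercising a real access control unit (ACU).
        return True

def run_test_suite(n_controllers: int) -> dict:
    controllers = [SimulatedController(i) for i in range(n_controllers)]
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(lambda c: c.self_test(), controllers))
    return {"tested": n_controllers, "passed": sum(results)}

# Scale gradually, as the methodology describes.
for n in (1, 50, 300):
    print(run_test_suite(n))
```

Running the same suite at increasing controller counts gives the rapid scenario testing and scalability checks that the manual process cannot provide.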

The current manual process for testing software has limitations when simulating large systems. The challenges are: • Understanding the Access Control and ATE domain, in particular the changing international standards and their impact on products and the customer base • Understanding the commercial implications of manual versus automatic testing in the developing marketplace • Exposure to the multi-asset protection market and the testing of security devices • Exposure to the supply chain, industry standard compliance and legislation • Communications at all levels of the business, from peers to stakeholders • Ensuring the project delivers the expected benefits and return on investment set out in the business case

Access Control Unit (ACU)

Our Vision In a rapidly changing market with a stream of new technologies, the TDSi vision and mission reflects a business that is constantly forward thinking; assessing market trends, identifying new applications for existing technologies and the potential for new technologies.

Our Challenge To deliver Quality Assurance on large-scale systems, assuring their stability, scalability and performance.

Our Project Aim The aim of the project is to develop an automated test system that will allow users to dynamically adapt system specifications and requirements, which will then be tested using automated software programs developed by the KTP Associate, Neil Smiley. Neil has the full support of Bournemouth University and both his supervisors: Dr Hongnian Yu (academic) and Russell Marande of TDSi (industrial).

Knowledge transfer Key benefits of the proposed system • Improve quality and reliability • Decrease development cycle times • Improve customer satisfaction • Rapid scenario testing

The cooperation between Bournemouth University and TDSi will generate new ideas and systems thinking in order to better understand the inherent complexities of imposing security products on business processes throughout the supply chain.





About Knowledge Transfer Partnerships Established in 1975, Knowledge Transfer Partnerships (KTP) is a UK-wide programme helping businesses to improve their competitiveness and productivity through the better use of the knowledge, technology and skills that reside within the UK knowledge base. A Knowledge Transfer Partnership serves to meet a core strategic need within a company and to identify innovative solutions to help that business grow. KTP aims to deliver significantly increased profitability for business partners as a direct result of the partnership, through improved quality and operations, increased sales and access to new markets. There are three principal players within a partnership:

Company partner - this is usually a company (including not-for-profit) but in some cases it can be a health or education organisation or Local Authority. KTP supports a broad cross-section of UK firms, regardless of size;

Knowledge-base partner - this is a higher education institution, usually a University;

KTP Associates – Each partnership employs one or more high calibre Associates (recently qualified graduates), transferring the knowledge the company is seeking into the business via a strategic project.

About the University of Brighton A part of Brighton since 1859, the university has been guided by its original ethos to become the diverse, multidisciplinary institution that it is today, actively engaging with a local and global community according to shared values of engagement, diversity, participation, collaboration and sustainability. With 21,600 students studying across five campuses in Brighton, Eastbourne and Hastings, the university community demonstrates civic responsibility across the south coast and beyond. Our courses, from undergraduate to research level and from the humanities to the natural sciences, encourage students to apply their academic knowledge to the world beyond university in order to become active, fruitful members of their local communities. Our multidisciplinary approach to research cuts across all schools and colleges. Our researchers are making significant contributions to social, cultural, economic and environmental wellbeing across a broad range of work, from the arts, humanities and social sciences to the physical and life sciences, technology and engineering. Drawing on our high-quality research facilities and expertise, we work with a wide variety of commercial and public sector organisations, providing access to new and emerging knowledge and technologies. Because our focus is on applied research, the majority of it has practical applications and direct relevance to companies and society at large.


We endeavour to remain a trusted partner, believing that by building strong, effective and mutually beneficial relationships we are collectively stronger and better able to tackle the real-world issues that we all face.

About the University of Brighton KTP Centre The KTP Centre at the University of Brighton has responsibility for developing and managing Knowledge Transfer Partnerships across the university. Cradle to grave support is provided with our dedicated staff working with you to scope the knowledge transfer need, identify a suitably experienced academic team, prepare funding applications, recruit a graduate and administer all aspects of live programmes. The University has a long history of supporting Knowledge Transfer Partnerships and sees them as a key mechanism for engaging with businesses for mutual gain. For further information contact Dr Shona Campbell, KTP Manager: 01273 642435 s.e.campbell@brighton.ac.uk www.brighton.ac.uk/ktp/

About Economic and Social Engagement at the University of Brighton Brighton is a large university with a wide variety of expertise that applies to business of all sizes. Our business helpdesk staff will be only too happy to put you in touch with the right people anywhere in our organisation. We will follow up your enquiry and make sure you get an answer. Our offer to business includes business growth programmes, training and development (in-house training and open courses), student placements & internships, collaborative research and consultancy.

For further information contact the Business helpdesk: 01273 643098 businesshelpdesk@brighton.ac.uk www.brighton.ac.uk/business/

