One of the things that I wanted to talk about is that a lot of people are buzzing about machine learning right now. We're excited about it, we're trying to apply it, and there are lots of in-house projects looking to apply methods that already exist to solve big problems. We can smell a machine learning problem: we know because data acquisition is really expensive and we've got a ton of data. Now, what methods do we apply to solve these problems? That's the hard part. One thing that doesn't work very well is just opening up a textbook or taking a course and then applying the methods directly from there, and the reason is that engineering problems require a bit of a different angle. So what I describe Solido as doing is really machine learning for engineering applications, and I'm going to talk a little bit about what differentiates standard machine learning from machine learning for engineering applications. There are really three areas here.

First, massive data. Certainly we have a lot of data in any machine learning application; the thing that's kind of unique is that in engineering a lot of our data is streaming data. We collect historic data, but we're also collecting data in real time as we're running our analyses, and we need to be able to analyze it in real time. The second thing is that this stuff is complicated: we can make very few assumptions about the nature of the data, there are a lot of dimensions, and there are all kinds of tricky things in there. And the third thing is that in engineering our bridges need to stand and our chips need to work, and we can't really bet our designs on estimates; we have to have really good evidence that our answers are correct. So I'm going to talk about these three things, which I think really characterize the difference between standard machine learning and machine learning for engineering.

So, massive data. We certainly have massive data, like I said, in every machine learning area, but what we've got here is, imagine a whole bunch of SPICE simulators, say 2,000 of them, working in parallel on solving a problem for a chip that we've never seen before, on a manufacturing process that we've never seen before. Certainly we can gather some information about how things have behaved in the past and start to shape our models from that, but there's a lot of real-time data happening here. So what we're doing is essentially real-time machine learning: we're streaming in information and building models in real time. To do this we need things like optimized streaming parsers, things that can actually read this data efficiently as it's coming in. We need parallelizable algorithms; a lot of the machine learning technologies out there don't actually parallelize that well, they run on single CPUs or small numbers of CPUs, and we need this to run on thousands of CPUs to keep up with the rate at which data is coming in. We also need really scalable cluster management: being able to actually distribute and dispatch all these jobs while bringing everything together into a single central model is a very hard problem in itself. We need automated recovery and repair: things go wrong when you're streaming real-time data, and you need to be able to say, okay, that went wrong, and figure out what to do about it. Sometimes you get incorrect answers coming in, which can pollute the models you're building, and being able to filter that out, adapt, and correct for it is something you have to do in this space, and it's very hard to do. And then being able to figure out what went wrong and debug real-time streaming data is quite challenging. So that's the first thing you have to deal with when you're doing machine learning for engineering, certainly in the chip space.
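The "building models in real time" idea above can be illustrated with a minimal sketch: instead of fitting a model to a fixed dataset, each streamed (inputs, measurement) pair is folded into the model as it arrives. This is a toy online least-squares model trained by stochastic gradient descent; the class name and the hidden "simulator" response are illustrative, not anything from an actual production system.

```python
import random

class StreamingLinearModel:
    """Toy online model: one gradient step per streamed sample."""

    def __init__(self, n_dims, lr=0.05):
        self.w = [0.0] * n_dims
        self.b = 0.0
        self.lr = lr
        self.n_seen = 0

    def predict(self, x):
        return sum(wi * xi for wi, xi in zip(self.w, x)) + self.b

    def update(self, x, y):
        """Fold one streamed sample into the model (one SGD step)."""
        err = self.predict(x) - y
        for i in range(len(self.w)):
            self.w[i] -= self.lr * err * x[i]
        self.b -= self.lr * err
        self.n_seen += 1

random.seed(0)
model = StreamingLinearModel(n_dims=2)
for _ in range(5000):
    # Simulated stream: each iteration is one "simulator result" arriving.
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 3.0 * x[0] - 2.0 * x[1] + 0.5   # hidden toy "simulator" response
    model.update(x, y)

print(f"fitted: w={model.w}, b={model.b}, samples seen={model.n_seen}")
```

Because the model is updated incrementally, memory stays constant no matter how long the stream runs, which is the property that matters when thousands of simulators are feeding results in continuously.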
The second problem is complexity. Imagine a chip: you've got lots of transistors, and if you're looking at process variation, the space that we live in, each one of those transistors has models for how it varies. So a chip with thousands of transistors might have tens of thousands of variables. These all interact; these aren't simple standalone responses we're looking at. They interact in tricky ways: nonlinear interactions, interactions with discontinuities, very tricky stuff. To handle this, the first thing we need is technology that can do an effective design of experiments. Given this large space, how do you start to collect some information about it? You can't just go run everything, because then you've defeated the purpose; we have to pick some places to start, so really good design of experiments technology is important. The second thing is advanced supervised learning techniques: basically a bunch of different ways of modeling data, some very accurate, some very fast and scalable, and different methods that we can apply within a single solution. We don't just use one; sometimes we'll need to work with multiple different types of models that filter down into the thing we want. Then there's intelligent screening and filtering. One thing we don't want to do is throw away important variables, variables that will matter under some conditions; they may not look important in our initial experiments, but as we start honing in on our areas of interest they might become important. So how do we make sure we don't filter the wrong things? We've sometimes got tens of thousands or hundreds of thousands of dimensions we need to filter, but how do you make sure you're not filtering the wrong ones? One thing that's really important for handling this data complexity is having a benchmarking infrastructure, a really quick way to test what you're doing against things that happen in the real world. We've got twenty thousand test cases that run automatically, which we can run in whole or in part overnight on large clusters, and that's really useful. And the last thing is just a big toolbox: if you have a whole bunch of different tools and you know how to put them together, you've probably got a good basis for handling the kind of complexity we've got in this space. It's not an easy space to apply machine learning to.

Then the third thing, and this is probably the most important thing if you're going to implement a machine learning technology in a production flow: you have to have the right answer. If the answer is not right, people won't use it. It's great if you can say, hey, I'll make something a hundred or a thousand times faster, or save you weeks, but if at the end of the day there's a risk of a respin as a result, or a risk that you might be massively over-margining, people still won't adopt that solution. So how do you do that, given that with machine learning techniques we're essentially building a big estimator? This isn't the real answer anymore; it's an estimate. You can't just give people that and say, well, trust it, the data around that region looks like it supports this answer. That's not good enough for a lot of engineering decisions.
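The design-of-experiments step mentioned above, picking a small set of informative points out of a huge variation space rather than running everything, can be sketched with one classic space-filling design: Latin hypercube sampling. This is a generic textbook construction, not Solido's actual method; the dimension and sample counts are illustrative.

```python
import random

def latin_hypercube(n_samples, n_dims, rng):
    """Return n_samples points in [0,1)^n_dims with one point in each
    of n_samples equal strata along every axis (a space-filling design)."""
    points = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # Assign each sample to one stratum of this axis, shuffled so
        # that the axes are paired randomly with each other.
        slots = list(range(n_samples))
        rng.shuffle(slots)
        for i, s in enumerate(slots):
            points[i][d] = (s + rng.random()) / n_samples
    return points

rng = random.Random(42)
design = latin_hypercube(n_samples=8, n_dims=3, rng=rng)

# Coverage check: every axis has exactly one point in each of its 8 strata,
# so no region of any single variable goes completely unexplored.
for d in range(3):
    strata = sorted(int(p[d] * 8) for p in design)
    assert strata == list(range(8))
```

The appeal for simulation budgets is that 8 runs give stratified coverage of every axis, whereas 8 purely random points can easily cluster and leave whole ranges of a variable unsampled.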
So what we actually need is accuracy-aware modeling techniques, and this is hard; there aren't that many accuracy-aware modeling techniques in the world today, they're not in textbooks, you have to invent them. We need active learning approaches where we can incrementally figure out where the areas of interest are. Usually they're around our worst cases: show me where my chip is likely to fail, and then go get lots of resolution in that area. We want to actively direct experiments into areas of interest, and having active learning methods that are really good at targeting problem areas is super important. The third thing, and this might actually be the most important thing I'm going to say today, is self-verifying algorithms. The reason is that if you can't prove to an engineer that the answer is right, they're not going to take the answer, even if you can describe the technology and it's been right before, maybe thousands of times. How do they know it's right in their case? If the algorithm can't prove its correctness, people are really hesitant to use that technology and bet their design on it. So if you can design algorithms that are verifiable, and in fact implement the verification as part of the technology, so that when you give an answer you don't just give an answer, you show that it's the correct answer, you prove at runtime that it's actually the right answer, that's really compelling to people, and that's what makes this stuff work in production. And then also, being able to prove through lots and lots of data helps: having lots of cases where you can show that it's worked, not just on your data but actually on customer data, having data on site, real production data that you know is yours, based on your chips and your processes and your design practices.
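The active-learning-plus-verification loop described above can be sketched in miniature: start from a coarse design of experiments, repeatedly spend the next "simulation" near the worst case found so far, and finish by rerunning the true simulator at the reported worst case so the final answer is checked rather than merely estimated. Everything here is a toy stand-in (the `delay_sim` function, the jitter policy), not any real accuracy-aware algorithm.

```python
import random

def delay_sim(x):
    """Stand-in for an expensive circuit simulation; in this toy the
    worst (largest) delay occurs at x = 0.7."""
    return 1.0 - (x - 0.7) ** 2

rng = random.Random(1)
# Initial coarse design of experiments over the input range [0, 1].
samples = [(x, delay_sim(x)) for x in (0.0, 0.5, 1.0)]

for _ in range(30):
    # Crude active-learning policy: refine near the current worst case
    # by proposing a jittered neighbour of the worst sample seen so far.
    worst_x = max(samples, key=lambda s: s[1])[0]
    cand = min(max(worst_x + rng.uniform(-0.1, 0.1), 0.0), 1.0)
    samples.append((cand, delay_sim(cand)))

best_x, best_y = max(samples, key=lambda s: s[1])

# Self-verification step: rerun the true simulator at the reported worst
# case, so the reported value is confirmed against ground truth at runtime.
assert abs(delay_sim(best_x) - best_y) < 1e-12
```

The budget is spent where it matters: almost all of the 30 follow-up runs land near the worst-case region instead of being spread uniformly, which is the resolution-where-it-fails behaviour the talk describes.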
Then you can believe it a lot more. Being able to run hundreds or thousands of cases and show that it gives the right answer again and again compared to brute force is really nice for building confidence as well. So those are really the three main problem areas: massive data, data complexity, and correctness, and we need technologies above and beyond what's in the machine learning textbooks, and beyond the research today, in order to actually build machine-learning-based solutions for engineering. Solido has been at this for twelve years; I was at Solido for eleven of those, and we've implemented these solutions in two product lines. Our first one is Variation Designer. That's a production tool that uses a lot of different machine learning techniques to differentiate against the status quo. People don't come to Solido because we have the same solution as everyone else; they come to us because our solutions are ten or a hundred or a thousand times faster, but still accurate and still provable: you can still make engineering decisions based on the results. The second set of tools is the newer one, which we just announced this year. It's called the Machine Learning Characterization Suite, and this is basically applying a lot of what we know, a lot of the tools in our toolbox, to the problem of library timing characterization, and we can speed up that process by weeks as well with machine learning technologies. This is for standard cell, memory, and I/O type problems. Another thing that we're doing, which we just announced recently, is ML Labs. We know there are a lot of other problems in this space that our customers have, that still exist in the world, that aren't solved yet by machine learning technologies, and there are a lot of initiatives to apply machine learning technologies to try to solve them. What we want to do is act as the glue: basically take customer problems, collaborate with customers on them, run a proof of concept using real data, make sure it's actually a reasonable direction, and then ultimately bring new products to market in new spaces. So that's a new initiative that we've got at

Solido as well, on the machine learning side. So that's just a little bit of information about machine learning for engineering, sharing some of the things that have made it successful in production, things you really need to think about if you're looking to apply machine learning solutions to solve your problems. Thanks.

[Audience question, partly garbled in the transcript, about margin improvement and pessimism reduction: what does Solido do specifically, given the same device models and variation blocks that others have, and what exactly do you improve?]

Yeah, that's a really good question. When designers are faced with uncertainty, they over-margin, because chips need to work. I think the best way to address this is that in order to make designers want to reduce margins, you have to give them a lot more certainty in the quality of the answers. With machine learning technologies you can get a lot more coverage. Let's say you've got a fixed amount of schedule: you can get a lot more coverage of the space, which gives you a lot more information and a lot more confidence that you've actually found all of the worst cases and know how everything behaves. Then you can make much more aggressive design decisions. That's the biggest thing that leads to over-margin reduction. We've got tools that analyze it and tools that help people fix it as well, but it's really about that confidence and about reducing uncertainty; that's where the gold happens, because designers make more aggressive decisions when they think they've got the right answer.
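The coverage-buys-confidence point in that answer can be made concrete with a toy calculation: estimate a failure probability from Monte Carlo "simulations" and attach a simple normal-approximation confidence interval. With more samples the interval tightens, which is exactly the extra certainty that lets a designer justify a smaller margin. The failure probability and sample counts here are made up for illustration.

```python
import math
import random

def failure_rate_ci(n_fail, n_total, z=1.96):
    """95% normal-approximation confidence interval for a failure rate."""
    p = n_fail / n_total
    half = z * math.sqrt(p * (1 - p) / n_total)
    return max(p - half, 0.0), min(p + half, 1.0)

rng = random.Random(7)
true_p = 0.1  # hidden failure probability of the toy "design"

widths = {}
for n in (100, 10000):
    # Each trial is a stand-in for one simulation of the design.
    fails = sum(rng.random() < true_p for _ in range(n))
    lo, hi = failure_rate_ci(fails, n)
    widths[n] = hi - lo
    print(f"n={n}: failure rate in [{lo:.4f}, {hi:.4f}]")
```

A 100x increase in coverage shrinks the interval by roughly 10x (it scales as 1/sqrt(n)), so the same schedule spent more efficiently translates directly into a tighter, more defensible bound on the worst case.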