Internship Report


Internship Report

The internship was completed at Infineon Technologies AG over a duration of 14 weeks (01.09.2008 - 05.12.2008).

Author: Levon Altunyan
Matr. Number: 2217213
University of Duisburg-Essen

Supervisor: Dr.-Ing. Stefan Heinen, Infineon Technologies AG
Signature: .....................................

Duisburg, 08 December 2008





INTERNSHIP REPORT

Levon Altunyan

in partial fulfillment of the requirements for the degree of
Bachelor of Science
International Studies in Engineering
Faculty of Engineering
University of Duisburg-Essen
September 01, 2008 - December 05, 2008


Levon Altunyan: Internship Report, in partial fulfillment of the requirements for the degree of Bachelor of Science, © September 01, 2008 - December 05, 2008

supervisors: Dr. Stefan Heinen

location: Duisburg
time frame: September 01, 2008 - December 05, 2008


Never Stop Thinking

ACKNOWLEDGMENTS

I would like to thank my supervisors, Dr. Tobias Scholand and Dr. Stefan Heinen, as well as Dr. Josef Eckmueller, without whom this industrial internship would not have been such a pleasant and fruitful experience. I am grateful that I was able to learn so many new tools and concepts in a short time, the main reason for which was the valuable guidance I received from Dr. Heinen. Furthermore, I would like to thank my parents, who have provided me with the opportunity to learn and face so many things. Last but not least, I would like to thank one special member of my family, namely my dog Archi, for the inspiration he has been giving me during times of hopeless laziness, regardless of the distance that divides us.




CONTENTS

i introduction 1
1 introduction 3
  1.1 Preliminary Remarks 4
  1.2 About Infineon 4
      1.2.1 Infineon Duisburg 7
  1.3 Goal of the Internship 11
ii the internship report 15
2 the internship 17



LIST OF FIGURES

Figure 1   Siemens HL microcontrollers 7
Figure 2   Infineon, site Duisburg 8
Figure 3   Main activities at Site Duisburg 9
Figure 4   Site Duisburg at night 10
Figure 5   Core Technologies at site Duisburg 10
Figure 6   Infineon's University Contacts 12
Figure 7   Sysway Panel 17
Figure 8   Sysway Flow 18
Figure 9   Clear Case's User Interface 19
Figure 10  Sysway's Localizer User Interface 21
Figure 11  The SystemC Concept 26
Figure 12  Comet Software 28
Figure 13  Installation via the IT department 30
Figure 14  Cygwin Shell interface 31
Figure 15  Autoconf Dataflow 31
Figure 16  Automake Libtool Dataflow 32
Figure 17  Makefile Generation Script 34
Figure 18  Config and MakeTemplate files structure 37


Part I INTRODUCTION



1 INTRODUCTION

Due to the company's privacy policy and the nature of the actual project, some information and parts of the internship work are not included in this report.


1.1 Preliminary Remarks

In January 2007 I met Dr. Tobias Scholand during the "Berufskontaktmesse" at the University of Duisburg-Essen. We discussed the opportunities for gaining practical experience at Infineon Technologies AG. After an interview with him and Dr. Michael Speth, I was offered an internship place, which I unfortunately had to decline due to some partial disagreements with my parents. I really wanted to accept that offer, but at the time I had to take into consideration my financial dependence and the will of my parents. Nevertheless, a year later, having almost completed my Bachelor degree, gained more knowledge that could be applied in practice, and become more useful to a remarkable company such as Infineon Technologies AG, I contacted Dr. Tobias Scholand once more, before considering any other opportunity, and asked him whether there would be any possibility to be part of their team at this point in time. That is why, at the beginning of my report, I would like to express my gratitude and to thank him, as well as my direct supervisor Dr. Stefan Heinen, that this concurrence of circumstances did not stand in the way of a later opportunity to work with them.

1.2 About Infineon

Infineon Technologies focuses on three main areas: Energy Efficiency, Communications and Security. It therefore offers semiconductors and system solutions for automotive, industrial electronics, chip card and security as well as applications in communications. Furthermore, the company offers memory products through its subsidiary Qimonda. Infineon's products stand out for their reliability, their quality excellence and their innovative and leading-edge technology in analog and mixed signal, radio frequency (RF) and power as well as embedded control. A strong technology portfolio with about 22,900 patents and patent applications is characteristic for the company.

With a global presence, Infineon operates through its subsidiaries in the USA from Milpitas, California, in the Asia-Pacific region from Singapore and in Japan from Tokyo. In the 2007 fiscal year (ending September 2007) the company achieved more than EUR 4 bn in revenues. Today, Infineon offers complete applications for communication and automotive solutions as systems-on-chip (SoC). Here, it is not just the miniaturization of structures that drives progress; innovations in product design play a key role as well. Prominent examples are the integration of RF circuits with voice processing components for use in single-chip mobile phones and the development of VoIP phones.

Automotive, Industrial and Multimarket: Infineon has been firmly committed to the automotive industry for over 35 years and has established a proven track record. It does almost twice as much business with the automotive and industrial market as the average semiconductor supplier.



Offering one of the broadest product portfolios in the business, Infineon is the number one automotive semiconductor supplier in Europe and the second largest supplier worldwide (ranking according to Strategy Analytics, May 2007).

Communications: Modern communications technology brings people together and eliminates geographic constraints. The quest is to ensure access to people and information anytime and anywhere. The wireless and access technologies from Infineon play a decisive role here. Infineon's connectivity activities are mainly driven by the business group Communication Solutions.

With EUR 1.2 billion in revenues in the last fiscal year, this group is active in a global growth market, driven by the increasing need for mobile music, mobile video and mobile internet with ever increasing data rates. Connectivity to services such as television for entertainment and information and global positioning systems for navigation purposes increases the scope of Infineon's solutions. By converging and integrating those services into a seamless network that can be used by anybody, the company is well positioned for future growth.

Communication Solutions: Infineon's Communication Solutions business group develops, manufactures and markets leading-edge end-to-end semiconductor products and solutions for cellular, wireless and wired communications, enabling smooth transmission of voice and high speed data from the backbone of the telecommunication network infrastructure to the end user's equipment. The Communication Solutions business group is subdivided into three businesses: Mobile Phone Platforms, RF Solutions (Wireless Communications) and Broadband (Wireline Communications).

• Mobile Phone Platforms: The market segments of multimedia telephones and entry-level telephones are to be distinguished here. In addition to baseband processors, radio frequency transceivers and power management chips as the classical semiconductor components, the spectrum covers platforms for mobile phones, including software solutions. This comprehensive system know-how for mobile phones of different performance categories and transmission standards is offered.


• RF Solutions: The main products are components for radio-frequency (RF) applications, short-range radio technologies and TV tuners. RF components are primarily transceivers for the GSM, GPRS, EDGE and WCDMA standards. Cordless telephones, Bluetooth, WLAN, UWB and A-GPS are subsumed under short-range radio technologies. In addition, RF power transistors for the base stations of mobile infrastructures are offered. For television reception, chips for analog and digital TV tuners are developed and manufactured.

• Broadband: In addition to the traditional telecommunications standards and the components for the mobile infrastructure, numerous products are offered for the currently most dynamic sector, broadband access technologies for network providers and end users, for instance SHDSL, ADSL, VDSL and Voice-over-IP.

Market trends: The cellular market is characterized by two trends. On the one hand, multimedia telephones require ever more powerful processors and increasingly complex software and system solutions. On the other hand, the important mass markets in Asia, Eastern Europe and South America are demanding economical low-end models. Infineon answers the growing demand for complete reference designs, including the software solutions, with tailor-made solutions to meet specific customer needs. The product portfolio includes components for cell phones and cordless phones, operating software and applications for cell-phone manufacturers as well as consumer electronic ICs. Additionally, the portfolio includes chips for Bluetooth, chips for cellular base stations, DECT, GPS and WLAN. At the same time, telephone lines are no longer used just for standard voice communication but also for many services such as the transmission of large amounts of data, multimedia applications and IP telephony. Infineon is ranked the number one supplier of voice components for traditional telecommunications infrastructure, including POTS, ISDN and T/E Carrier. Delivering analog, digital and mixed signal ICs along with comprehensive software suites, Infineon enables system manufacturers to design high quality, high-speed data and telecommunication system solutions from the metropolitan backbone up to the customer premises equipment. The focus in this area lies on enabling end-to-end IP networks and value-added Triple Play services for voice, video and data. Being a leader in broadband access solutions, Infineon delivers semiconductor products for all flavors of xDSL (Digital Subscriber Line) as well as carrier-quality VoIP solutions for line-cards and CPE applications. Infineon's advanced ICs and design kits for gateway applications allow integration and interconnection in wireless and wired digital home networks to enable broadband communication in the home.



1.2.1 Infineon Duisburg

Site Duisburg develops semiconductor solutions for wireless and wireline communication and microcontroller solutions for automotive applications, and accommodates the European Sales Department for Automotive.

In 1976, Sales Automotive Solutions (15 employees) at Siemens Düsseldorf started selling microcontrollers, automotive power products, sense and control products and discrete semiconductors. EPOS GmbH & Co KG (a subsidiary of Infineon, 50 employees), established in 1998 as a joint venture of ELMOS (Dortmund) and Siemens HL, focuses on the development of embedded power microcontrollers for automotive solutions, as well as mixed-signal and flash development for 8/16/32-bit microcontrollers (see fig. 1). Since February 2005, three sites (DC D in Düsseldorf-Angermund, founded in 1985, EPOS GmbH, and the actual European Central Sales Office, formerly located in the centre of Düsseldorf) have been under one roof in the south of Duisburg (see fig. 2). All development and sales activities are now concentrated at this new site in the middle of NRW.

Figure 1: Siemens HL microcontrollers


The Development Center NRW is a fully functional development site (CE, RF, DFV, FWV, MT). It is TL 9000, ISO 9001 and TS 16949 certified. There are over 200 employees working on Concept Engineering, Product Development, Product Test and Product Engineering, Access Customer Support and Maintenance Responsibility, including support functions such as QBE for quality assurance and project office and FC&IT (Financial Control & IT support). The entire development process from product idea to ramp-up in DC NRW Duisburg is carried out by highly qualified engineers and expert groups who have graduated from renowned universities for electrical engineering, mostly from around NRW.

Figure 2: Infineon Duisburg

Infineon is playing a key role in the expanding wireless world by actively driving and supporting market trends with sophisticated semiconductor-based solutions. The development center located in Duisburg is the leading site at Infineon for the development of Systems-on-Chip for 3rd generation mobile phones and beyond (see fig. 3). The vision of the Development Center NRW is to be a strategic partner for the COM and ADS business groups to support the customers' and BUs' success. The site employees' mission is to proactively improve their competencies and productivity in order to develop best-in-class communication products. Furthermore, a collaborative working style empowered by flexibility and enthusiasm is fostered.

Product milestones are:

• Cordless connectivity: DECT, CAT-IQ, GPS
• Wireline communication networks: T/E Carrier, ATM, ISDN, xDSL
• Wireless communications: GPRS/EDGE, UMTS, HSxPA
• Embedded RF: GPRS/EDGE, GPS

Infineon NRW is well positioned and staffed to design, verify, test and support Systems-on-Chip that meet the different communication and automotive standards, offering the highest circuit and system complexity. Infineon NRW's special expertise includes:



Figure 3: Main activities at Site Duisburg

• Monolithic integration of mixed-signal and digital designs for best BOM system solutions
• Major inputs to the development methodology for ultra-complex products (e.g. virtual prototype techniques)
• Low-power technologies for mobile applications (65 nm, 40 nm and beyond)
• Design to cost: manufacturing costs, yield, chip area, testing
• Complete turnkey solutions (e.g. DECT reference phones)
• Broad system expertise (e.g. 3G)
• Strong experience in design-for-test (e.g. IDDQ, Delay-Scan, RF-BIST)

Infineon Duisburg develops leading-edge products for 2G and 3G mobile phones. Highly integrated signal processing devices must be optimized with regard to system costs and power consumption. Therefore, one of the main competences is the integration of full modems, including the RF part of the communication system, on a single piece of silicon. In addition, Duisburg specializes in research activities to design tomorrow's systems (see fig. 5).


Figure 4: Site Duisburg at night

Figure 5: Core Technologies at site Duisburg



Core technologies:

• GSM: Infineon Duisburg develops leading components for single-die integration of Radio Frequency (RF) and Baseband (BB).
• UMTS: Duisburg is Infineon Technologies' leading site for the development of integrated circuits for the most recent UMTS cellular standards.
• DECT: Duisburg's system development group offers complete system products for cordless phones.
• RF: Duisburg is the site within Infineon which enables the integration of embedded RF circuits into CMOS SoC modems for GSM, UMTS and GPS.
• ASIC development: Duisburg is a major development partner for Microsoft's consumer products such as PC mice and XBOX360 systems.

Support activities:

• Wireline communication products: Infineon Duisburg hosts the European Customer Support Center.

COM NRW CE is responsible for the development of state-of-the-art RF, architecture and algorithm concepts. Located in one of Europe's regions with the highest university density, DC NRW benefits from a rich network of renowned universities and research institutes in the field of electrical engineering and information and communication technology, enabling intense collaboration in areas such as:

• Project collaboration
• Internships
• Working students
• Diploma theses
• PhD studies

International contacts: In addition, Infineon Duisburg has established close links to foreign universities such as the University of Tampere (Finland), the University of Beirut and the University of California, Berkeley (see fig. 6).

1.3 Goal of the Internship

The goal of my internship was to design and program a small part of a bigger project concerning virtual prototyping control and algorithmic modules, which would allow easier integration of future changes and migration to new projects.


Figure 6: Infineon’s University Contacts

I had the opportunity to get familiar with the Perl scripting language, Unix and the GNU Makefile system. Furthermore, I practiced the knowledge I had gained at the university, applied it and, most importantly, extended it considerably. My intensive occupation with the build tool "make" as well as with the scripting language Perl mainly served to improve the automatic generation of the existing GNUmakefile build environment, part of Infineon's virtual prototype projects. The main idea was that all units have some common parts in the GNUmakefiles used to compile them, which could be generalised, and some specific information, which could be provided by the user. This led to the first step of the project: developing a small Perl-based script. Next, the information was separated into two files, Config.txt and MakeTemplate.txt. The final result of my work was utilized by the current project and would most probably also be suitable for future projects of the company. Another main point concerning the purpose of this internship is that I gained technical experience and knowledge which can only be acquired in such an environment in the given field. I learned better how to work in a team, how to rely on myself and my colleagues, how to meet deadlines and how work can be pleasant and fun. I have been part of a global high-tech company with highly skilled and fully engaged people. In addition, I learned that it is Infineon's employees' deep and diverse knowledge which enables the company to drive innovative solutions in order to meet today's semiconductor challenges and achieve the best results.



I had the pleasure of working closely together with my colleagues across several hundred kilometers in everyday life. I could experience the values of the company: commitment, partnership, innovation and the creation of valuable products. The working environment was outstanding. The casual and friendly atmosphere enabled innovative teamwork, which was also supported by the exceptional architecture of the office buildings and the splendid interior design. The company also gave me the chance to work with colleagues and partners from all over the world. Furthermore, a flexible work model has been introduced in Duisburg to support the work-life balance. In this challenging environment, I had access to a wide range of courses and training opportunities. I had the chance of an attractive experience, permitting personal development in line with performance, potential and preferences. Equally important was the fact that I received knowledge in a professional, scientific manner, thanks to the highly skilled technical staff, in the person of my personal supervisor Dr. Stefan Heinen. Last but not least, I learned how to concentrate in a working environment, a skill I previously lacked. For that I would like to say THANK YOU to Infineon Technologies AG and the team I had the pleasure to work with.




Part II THE INTERNSHIP REPORT



2 THE INTERNSHIP

My first week at Infineon Technologies AG, site Duisburg, included an introduction to the COM NRW CE (Communications North Rhine-Westphalia Concept Engineering) department and its highly skilled team. Some of the topics in the focus of their work included, among others, innovative concepts for 3G baseband chips, best-in-class algorithm development and holistic optimization of the system architecture. My first task was to get familiar with the scope of Sysway Release 1.1.1. Fig. 7 shows the Sysway v1.1.1 Panel user interface. The best way to do this was of course the best source of information, the Sysway User Manual. In short, Sysway is Infineon's system level design flow. It is implemented in compliance with Infineon's Inway design system. Due to the progress in deep sub-micron process technologies, complex silicon-based and SW-dominated systems can be implemented nowadays. As a consequence, the effort for system concept engineering and SW development has increased significantly in recent years. So far Infineon's Inway did not address system design. Therefore Sysway extends the existing Inway to cope with the complexity of modern silicon-based systems. Sysway enables system projects to handle the SW and HW complexity of the systems in a reproducible and systematic way. It increases the productivity of the system projects and hence contributes to the reduction of the system design cycles and of the design productivity gap. Sysway V1.1.1 covers the following system design steps, shown in fig. 8:

• Algorithm Design: Development of algorithm models, frequency analysis and data flow simulation of signal paths.
• Transaction Level Modeling (TLM): Development of Transaction Level Models that represent the individual peripherals of the system HW architecture.
• Virtual Prototyping (VP): Development of a system HW architecture model including Infineon's Transaction Level Models and external TLM IP (cores, buses).

Figure 7: The Sysway v.1.1.1 Panel


Figure 8: Sysway Flow

• Embedded SW Development: Development of SW for target cores based on the VP platform.
• Verification of Algorithm Models and Transaction Level Models: Ensuring equivalence between the Algorithm Model / Transaction Level Model and the RTL Model based on a common test bench.
• Code Generation of System Level Views: Creation of system level stub models from a concise ...

As Sysway extends the HW development flow, it addresses different user groups than traditional RTL-to-GDS flows. It is intended that Concept Engineers use the Algorithm Design, TLM and VP parts of the flow. SW Engineers will focus on the Embedded SW Development and VP work flows. System and Component Verification Engineers will run system test cases on the VP platform. And last but not least, the HW Design and Verification Engineers will use the Verification work flow. They will benefit most from the common design system for RTL-to-GDS and system development. The HW Design and Verification Engineers can use the system models as a reference for the HW implementation and verification. A particularly interesting subtopic for the subsequent practical tasks was the Sysway make concept. It introduces a makefile-driven and platform-independent library and executable build process. The make concept is used to build the libraries of the TL Models and the executables of their stand-alone test benches. The libraries SystemC 2.1, ifx_basics and ifx_standards, which are required to build TL Models, are implicitly available within the make concept. The first days also included the installation and adjustment of the software needed for the upcoming tasks, such as IBM's Rational ClearCase, Citrix ICA and Emacs. The starting point for my future tasks was to obtain access to the UNIX database.



Figure 9: Clear Case's User Interface

Therefore, I first had to "buy" one from Infineon's Munich online service shop. Once this was solved, I was able to access the ClearCase VOB files. At this point I could take full advantage of the functionality of IBM's Rational ClearCase software (see fig. 9). With the help of this tool for revision control (i.e. configuration management, SCM) of source code and other software development assets, I could load the objects under version control in ClearCase, with their histories, from the respective repositories, the Versioned Object Base (VOB) files. To make sure that the Windows limitations on the number of users did not matter, I set the respective ClearCase environment variables to their correct values. After this, a simple log-off and log-on procedure was needed and I was ready for the next step: creating a dynamic view using the ClearCase Explorer. As the project's source files are stored in one or more ClearCase data repositories, the created view allowed me to access those files. Each time I checked out, modified and checked a file back in, ClearCase automatically created a new version that records the changes. Version control was just one of the features that enabled our team to manage changes and coordinate access to sources. Needless to say, a big project such as the one implemented by the team I was part of had several hundred files and folders, each of which could have one or more versions, and it had to be ensured that the correct ones were selected for the particular need. In addition to ClearCase elements, my view could contain view-private objects, normal working files and folders that are not stored in a ClearCase VOB, which I needed for my future tasks during my internship. To include a view-private file or folder, I simply had to drag it into my view from my Windows Explorer. The ClearCase Explorer shortcut pane, toolbar and menus provided access to all the ClearCase operations I could perform in my view.



In addition, I could perform many operations using standard Windows functions. For example, double-clicking an HTML element performed the associated operation, such as opening the document in my web browser. To copy a set of source files into my view, I used the Update command, not directly, but rather through the command prompt interface. Useful commands related to this were catcs, setcs, edcs and update. I needed to choose a new drive letter and map it as a virtual partition for the view. This was followed by mounting the correct VOBs. After creating the view I was able to see the project database and also the version tree. Finally, I had to set the latest baseline configuration specification from the team's newsletter. After applying the changes in the software with respect to previous points in time, I had the correct version of the project files. A config spec in the properties list of the custom-made view was used, which ensured that the correct rules for selecting the versions of elements to appear in the view were set. Of course, later on during my internship, after a series of changes to some of the files, this initial copy of the config spec in the properties list of the custom-made view was periodically updated, to mark the significant states of the corresponding latest baseline. This is the point at which I would like to thank Mr. Joerg Heidemeier for his fast response concerning the UNIX account creation needed for all these procedures. Shortly after setting up this initial dynamic view, for some reason the remote folders in the ClearCase Explorer were temporarily not accessible. This was the reason why, as well as for optimization purposes, all of this setup was repeated, but for the case of a snapshot view (a local version), and also for localizing the Sysway V1.1.1 software (see fig. 10). I also invested some time in learning the Citrix software. It was used as login software for database access, which was running under Unix. After managing this part, I tried to create the needed register file responsible for the interconnections between the specific hardware, as well as other hardware parts, and the respective driver. This simple-looking task, to properly create the test bench executable and then to execute it, took a little more time than initially expected. For this reason, I tried some possible solutions, including several make scenarios concerning the local and the remote Sysway versions, trying both my snapshot view and the dynamic one, and always manually "killing" the respective hanging bash and Cygwin processes, but this will be discussed in more detail in the latter part of the report. Unfortunately, in the generated error log file for the slave part, the error from the previous week was still there. Meanwhile, I also spent some time taking care of filling out the necessary "ZEK" working-hours sheets. I discussed the error message from the compilation process of the respective module, and we found out that it had actually occurred before and that there was already a solution for it. The only thing left for me was to reach Dr. Josef Eckmueller from the Neubiberg site (near Munich) for additional information concerning this issue. I could not reach him immediately. While I was stuck with that, Dr. Heinen suggested that I could already make myself familiar with the basics of SystemC TLM modeling.



Figure 10: Sysway Localizer - User Interface



The background is that for the currently running project (ES2), the current 3G baseband chip, a new functionality called "ctrace" was needed, which was based primarily on the "usif" module. That is the reason why I contacted Mr. Frank Gersemsky (one of the other internship supervisors and part of the team) for an overview of the functional details. He was the specification owner for the module and an expert on this topic. Once again, I would like to remind the reader that due to the company's privacy policy and the nature of the actual project, some information and parts of the internship work are not included in this report. That is the reason why I will not go into more detail concerning this particular modeling concept and its actual implementation. Going back to the errors that had to be resolved, one example concerning one of the modules was:

make[1]: *** [create_lib] Error 2
make[2]: *** No rule to make target 'simlib-debug'.  Stop.
make failed

It meant simply that the build tools wanted to build an object file but couldn't find all of the source files needed to do it. Consequently the tools would attempt to make the missing files but would then discover that they don't know how to do that, hence the error message about not having a rule to "make target". The message could happen for a number of reasons. It could be that the build files were wrong somehow, or it could be that the build was messed up. Or it could simply be that one had forgotten to include a file that is needed. There were also other error messages occurring after solving the popping-up errors one by one. The idea was simple: to test whether it was possible to set up a running compilation on a fresh snapshot view of the current project using the respectively modified load rules. As the experts for this project were Dr. Heinen and Dr. Eckmueller, it made sense that I was testing exactly this scenario on my PC and reporting to them the occurring errors as well as my feeling about what the actual error could be. In this way I could quickly learn how the system basically works and how the separate components interact. The know-how and experience of Dr. Heinen helped us very much to cope with all errors very quickly, eliminating all annoying messages and actually running the make generation error-free. For example, there was another error which was solved by simply creating one missing temp folder, which, even empty, needed to exist for one of the used programs. The error which was generated was of course again a little misleading, because it stated that permission to it was denied. As other examples I would mention: usage of libraries which do not exist, too long string names, a missing xml file, load rules that were not thorough, some white spaces after the unit definitions in the GNUmake files, etc. I also managed to contact Dr. Eckmueller, who was an important part of the team. I provided him with a typical generated log file, after a "make cleanall" command, characterising the error for which he already had a solution. We were near the goal of making all makefiles for all units (more than 20), each of them having its own particularities, run without any errors.
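To make the failure mode tangible, the following minimal GNUmakefile sketch reproduces it; the target, library and object names are hypothetical and not taken from the project sources.

# create_lib depends on simlib-debug, but neither a file nor a rule with that
# name exists, so GNU make aborts with a message along the lines of:
#   make: *** No rule to make target 'simlib-debug', needed by 'create_lib'.  Stop.
create_lib: simlib-debug
	ar rcs libmodule.a module.o

Depending on whether the missing name refers to a real source file or to another target, the fix is to restore the file, correct the dependency, or provide the missing rule.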



There were still some open questions, but they were solved accordingly, such as the automatic generation of the generic C files, some compilation performance issues, removing the suppression of error messages in the makefiles, etc. Then I had to edit all makefiles so that the suppression of error messages was removed. In this way it was much easier to trace error messages which were previously invisible to us and therefore hard to resolve. The command which Dr. Eckmueller proposed was simple, just "2>&1;", but one should be careful to insert it at the right place, and in about 60 files. The list of modules was clear: this was needed for all modules that are part of /project_lib/vast/GNUmakefile. In some makefiles I observed an older or different makefile structure. We were not sure whether these test benches were still really in use and whether a dedicated test bench was indeed needed. That was the reason why we first discussed whether an update was necessary, and whether this held for all of them. After deciding which ones were still needed, the update procedure started. Due to the different makefile structure mentioned before, the general direct addition of the needed command would not lead to the removal of the suppression of error messages in the sub-makefiles. Therefore, some new makefiles were created first. Finally, I had to add them and also to include the respective ClearCase rules, so that the new versions of these files would be selected after updating the view. For these tasks it was helpful to have some background information about Sysway (and Inway). In the flow there is the so-called unit concept. Each unit is a design entity which can have different views (systemc, xml, vp, ccss, vhdl, specman, ...). For each view of a unit a standard directory structure exists, which covers the primary sources as well as the generated data. For system design the xml, systemc, vp and ccss views are basically needed. For example, for some units the xml, systemc and vp views exist, but the ccss view does not. The standard directory structure for the xml and systemc primary sources is defined as follows:

$WORKAREA/units/<unit>/
|--source
   |---sc
   |   |---beh
   |   |   |---GNUmakefile
   |   |
   |   |---tb
   |        |---GNUmakefile
   |
   |---xml
       |---GNUmakefile
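Coming back to the removal of the error-message suppression described above, the following sketch shows the kind of change that was applied; the target name and recipe are hypothetical, and only the "2>&1" redirection is the point.

# Previously the sub-make's stderr was effectively hidden from the build log.
# Merging stderr into stdout with "2>&1" makes every error message visible
# and therefore traceable.
create_lib:
	$(MAKE) -C beh simlib-$(BUILD_TYPE) > build.log 2>&1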

For the SystemC build environment we were now able to exploit the fact that we have units with a standard setup. To build an executable for a SystemC test bench we just need to know which units are needed to link the executable. Based on this:

• the required include paths can be derived



• the library paths are known
• the lib names which are needed in the linking step are defined
• additional user input is only required to provide extra information (e.g. aeneasbaselib, additional linker options, additional include and lib paths, ...)
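As an illustration, a test bench GNUmakefile for a unit that follows the standard setup can then shrink to something like the sketch below; the unit names and derived paths are purely illustrative, and UNIT/ADD_UNITS are the parameters described in the next paragraph.

# Unit-specific input (hypothetical unit names)
UNIT      := ccu
ADD_UNITS := ifx_basics ifx_standards
ALL_UNITS := $(UNIT) $(ADD_UNITS)

# Everything below is derived from the standard directory layout
INCLUDE_PATHS := $(foreach u,$(ALL_UNITS),-I$(WORKAREA)/units/$(u)/source/sc/beh)
LIB_PATHS     := $(foreach u,$(ALL_UNITS),-L$(WORKAREA)/units/$(u)/libs/$(BUILD_TYPE))
LIBS          := $(foreach u,$(ALL_UNITS),-l$(u))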

In the build environment the UNIT and ADD_UNITS parameters describe all units required to link the test bench (ALL_UNITS = UNIT + ADD_UNITS). There were of course some exceptions in the VOBs which had to be taken into account; for example, in some cases a path is added to the EXTRA_INCLUDE variable. Other exceptions also occurred. There were some makefiles which did not check whether the source release-file hwapi (HardwareAPI; additional constructs for the representation of constants and the grouping of registers and memories) existed, or rather whether it was needed at all. Some modules were also using libraries from other ones, but of course this was not the case for all of them, so the makefile would show an error. That is why the makefile needed to be modified in a way that it checks whether the hwapi file exists in the standard SystemC directory of the source unit. The errors generated this way, which were preventing a proper make compilation, were unacceptable. It was my task to write the code which would overcome this small flaw. As discussed with Dr. Heinen, I had to verify that the error was now gone. One thing I did not observe in my first try was that I had used spaces instead of a <tab> key to precede the rules. Dr. Heinen gave me the hint that this probably would not work this way. A further improvement of the makefile was to suppress the command echoing, which I did with the character "@" at the beginning of a line. In order to notify the user that there is no release-file available, I output a warning using the shell command "echo". In general, the makefile should in the end run through all modules without errors for the different targets. At that point this was not yet the case. Possible solutions were modifications in the makefile itself or the generation of dummy files in the respective modules. Nevertheless, after thinking about the problem and taking some important rules into consideration, the implementation was done. The only thing left was to include the -verbose command so that everything that had been made would also be printed out on the screen. In this way the finding of errors was facilitated. I chose to modify the makefiles, which was the better solution:



release-sources-unit-%:
	@if [ -e $(WORKAREA)/units/$(@:release-sources-unit-%=%)/vprel/sc/hwapi-releasefiles.txt ]; then \
	sh -v -c "cd $(WORKAREA)/units/$(@:release-sources-unit-%=%)/vprel/sc/;\
	$(WORKAREA)/bin/mkrel --mode=copy hwapi-releasefiles.txt"; \
	else echo "WARNING: There is no source release-file available."; \
	fi

release-libs-unit-%:
	@if [ -e $(WORKAREA)/units/$(@:release-libs-unit-%=%)/vprel/libs/$(BUILD_TYPE)/$(build_type)-library-files.txt ]; then \
	sh -v -c "cd $(WORKAREA)/units/$(@:release-libs-unit-%=%)/vprel/libs/$(BUILD_TYPE);\
	$(WORKAREA)/bin/mkrel --mode=copy $(build_type)-library-files.txt"; \
	else echo "WARNING: There is no library release-file available."; \
	fi

release-commit-sources-unit-%:
	@if [ -e $(WORKAREA)/units/$(@:release-commit-sources-unit-%=%)/vprel/sc/hwapi-releasefiles.txt ]; then \
	sh -v -c "cd $(WORKAREA)/units/$(@:release-commit-sources-unit-%=%)/vprel/sc/;\
	$(WORKAREA)/bin/mkrel --mode=checkin hwapi-releasefiles.txt"; \
	else echo "WARNING: There is no source release-file available."; \
	fi

release-commit-libs-unit-%:
	@if [ -e $(WORKAREA)/units/$(@:release-commit-libs-unit-%=%)/vprel/libs/$(BUILD_TYPE)/$(build_type)-library-files.txt ]; then \
	sh -v -c "cd $(WORKAREA)/units/$(@:release-commit-libs-unit-%=%)/vprel/libs/$(BUILD_TYPE);\
	$(WORKAREA)/bin/mkrel --mode=checkin $(build_type)-library-files.txt"; \
	else echo "WARNING: There is no source release-file available."; \
	fi

After solving these and several other subproblems, I was able to correctly build all needed units, and the regression test concerning compilation was also successful. Now the updated configuration specification was released and the load rules were updated. So I simply had to check in the files, write a reason for the version change, check them out again, and add the respective entry for this operation in the virtual prototype's configuration file. At this step it was important to take into consideration the difference between the rules "element exampledir/..." and "element exampledir/". In the first case everything contained in the subfolder is selected, excluding the current folder, in contrast to the second case, where only a newer version of the current directory itself, without any files or subfolders, would be selected. Last but not least, one should also check that the UTC time zones are correctly set.



Figure 11: The SystemC Concept

Then one finally executes the update procedure by setting the view to the most current configuration with the setcs -cr command. This work was also mentioned in the next week's weekly VP newsletter, in the Lowlights section. The reason, as already pointed out, was that the Sysway 1.1.1 build environment, based on makefile generation for VP module test benches, was still unstable, so my job was basically to work on its improvement with the help of Dr. Eckmueller and Dr. Heinen. My next task was to check the different test units for behavior that looked suspicious to me, after checking the Execute Log Files errors produced by the regression test script, and to notify the module owners about it. So I contacted them individually, listed the occurring messages I had doubts about in attached files, and kindly asked them whether these were errors or simply valid statements. As I had guessed, most of them were simply "on-purpose built-in errors" in the local tests to check the error-message system. Of course it would not have been OK if I had been getting error messages other than the original ones, e.g. after a modification in the code, but this was not the case. In parallel I also started with the introductory reading about the basics of SystemC TLM modelling and related subtopics such as threads (especially preemptive multithreading) and events. Dr. Heinen kindly provided me with some reference guide literature on these topics. He also introduced me to some of the C++ concepts and syntax important for the SystemC concepts (see fig. 11). Furthermore, I had to check whether each library was really needed by the new sc_Busmaster. The idea was to remove libraries that were not needed from the modules. In this way I allowed the linker to avoid linking extra libraries into a binary. This not only improves startup times (as the loader does not have to load all the libraries for every step) but might also avoid the full initialization of things which are not needed at all. One possibility to solve this problem was to use the --as-needed flag passed to the GNU linker. The flag tells the linker to link into the produced binary only the libraries containing symbols actually used by the binary itself.
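A minimal sketch of how this flag is usually passed through the compiler driver in a makefile is shown below; the target, object and library names are hypothetical, and, as explained further down, the GNU toolchain was not actually usable in this project.

# --as-needed must appear before the libraries it is supposed to filter
LDFLAGS += -Wl,--as-needed

sc_busmaster_tb: $(OBJS)
	$(CXX) -o $@ $(OBJS) $(LDFLAGS) -lsystemc -lifx_basics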



This binary can be either a final executable or another library. In theory, when linking something, only the needed libraries are passed on the command line used to invoke the linker. But to work around systems with broken linkers or systems not using the ELF format, many libraries declare some "dependencies" that get pulled in while linking. A simple example can be found by looking at the libraries declared as dependencies by gtk+ 2.0. The problem was that we were not using the UNIX compiler at all. Therefore, I had to check each submodule and rebuild the sc_Busmaster after removing one file at a time, checking whether it still built successfully (see fig. 12). Due to the longer time needed for every single test and the vast number of files to be checked, one has to learn some kind of multitasking while waiting for the respective response :). After all, there were 10 modules with 5 to 6 libraries and object files on average. For these tests we were not using the latest version of the ES2, which is why I had to remove the redundant include statements in some of my rules. The only thing I could do to speed up the process was to temporarily stop some services such as the anti-virus one. After building and checking with which files the sc_Busmaster still built, I found that 2 of the files were no longer necessary for a successful build of the module. In addition, I got myself acquainted with Infineon's intranet site and some interesting facts about the company, and about the Duisburg site in particular. Furthermore, I took part in the weekly VP meetings, which took place on Thursdays with an average duration of 35 minutes. At these meetings the weekly progress, important issues and hot topics were discussed. This was the time when everybody from site Duisburg and site Neubiberg could directly ask important questions via a netmeeting and/or telephone and get prompt answers. Also the agenda for the coming week was discussed and the tasks were distributed among the team members. Despite our previous work on this topic, we could still find many different behaviors and looks of the makefiles. Basically, if we really wanted to get them clean, we had to "crawl" through all makefiles again and bring them to an identical state. For that reason, some mandatory requirements for an "OK" label for the build environment were set up from our point of view. They included the following:

• all error messages are displayed: that was already supposed to be in place, but needed to be verified
• compilation stops immediately when an error occurs: this might have been overridden by the option -k
• subsequent makes do not rebuild anything that was already built in a previous make pass
• in case of an error, a subsequent make resumes compilation exactly at the point of the error


28

the internship

Figure 12: Comet Software



• "clean" cleans everything: all generated files, including makefiles, generated sources, etc.
• all makefiles are structurally identical

Our feeling was meanwhile that we could not permanently achieve these targets with the current makefile environment, since the mingling with the CCSS tool made it very difficult to maintain the modules and to develop them further. We also thought that there were some essential obstacles in the current build environment which made it difficult or even impossible to achieve the above requirements. To put it simply, we did not want to spend man-weeks and man-months on this again and again. So, with respect to upcoming projects, we decided that we should finally resolve this ever-lasting problem. In our opinion a big obstacle to keeping makefiles consistent was that they were manually derived from a makefile template. Every change in the flow thus potentially generated considerable effort, because the numerous makefiles had to be touched. Some ideas to overcome the observed problems were:

• automated makefile generation from a template using the "imake" utility
• reduction of the involvement of the CCSS tool to a minimum, namely source generation, makefile generation and slavesim generation
• replacing the CCSS generation by a few dedicated tools for the above tasks
  - the make policy of target/dependency can be supported in a more granular way
  - the requirement of starting/stopping at the point of error can be achieved
• makefile targets use the CCSS makefile interface
  - so far it was much more stable than e.g. the Tcl interface
  - much faster than starting CCSS in batch mode every time

My task was to dig into the possibilities of the imake utility and to find out whether there was maybe something even more powerful around. Our hope was that if the generation approach worked, we could simply take the makefile templates, modify them accordingly and then generate all makefile instances. After some research on the topic I found out that imake is a discontinued build automation system implemented on top of the C preprocessor. Imake was able to generate makefiles from a template, a set of cpp macro functions and a per-directory input file called an Imakefile. This allowed machine dependencies (such as compiler options, alternate command names and special make rules) to be kept separate from the descriptions of the various items to be built. Despite this functionality, I proposed that maybe we should go for imake's substitute, the GNU Autotools. The task was now to set up the environment, check how the Autotools actually work, and test them on an example unit of the project. It sounded as if this should in fact already be quite an easy task.
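For orientation, the input the Autotools expect is quite compact; a hypothetical, minimal Makefile.am for one unit's behavioral library might look roughly as follows (all names are illustrative, and nothing like this was actually added to the project):

# Makefile.am: processed by automake into Makefile.in, which configure turns into a Makefile
lib_LTLIBRARIES       = libccu_beh.la
libccu_beh_la_SOURCES = ccu.cpp ccu.h
AM_CPPFLAGS           = -I$(top_srcdir)/units/ifx_basics/source/sc/beh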



Figure 13: Installation via the IT department

Taking into consideration that the current templates were already quite generic, for each unit we just needed some additional status information, such as dependencies on other units and extra libraries to be included. The first thing to be solved might sound trivial, but it was actually to install Cygwin. This was a prerequisite for providing, under Microsoft Windows, a command line and programming interface familiar to Unix users. Furthermore, it was needed because it provides programming language header files and libraries, making it possible to recompile or port Unix applications for use on computers running Microsoft Windows operating systems. The main problem here was that the localised version of Sysway, as well as the installation provided by the IT department, did not include some recent packages needed by the GNU Autotools (see fig. 13). An alternative was to try to install it privately, but this would only lead to registry corruption and even more problems, which was not really an option. The only way was to kindly ask Dr. Eckmueller for the inclusion in the Cygwin installation (coming with Sysway) of newer stable versions of the following packages, part of the GNU build system, also known as the Autotools, and their respectively needed dependency subpackages:

• Autoconf: 2.61-1 or later
• Automake: 1.10.1 or later
• Libtool: latest possible stable version

This was not directly possible for the current Sysway V1.1.1, but the workaround was that I could temporarily use the custom XML file for Cygwin (see fig. 14).



Figure 14: Cygwin Shell interface

Figure 15: The Autoconf Dataflow

This provided me with the Cygwin version used in Sysway 2.0, with the latest tool versions and also many more tools than I had before. After checking the availability of the packages during a netmeeting, we decided that after the roll-out a proper configuration for the whole project team would be provided and the newer version would be available. Having Cygwin set up, I had to take some time for a short learning phase, a tutorial on basic UNIX commands and principles. Then I used the best policy, a combination of learning by trying and doing, to learn and understand the complex structure of the GNU Autotools. This then had to be refined to the subtopics of Autoconf, Automake and Libtool, whose basic interconnections are shown in fig. 15 and fig. 16. After digging into GNU's Automake documentation and discussing the current status of the makefile generation, I concluded that there were the following obstacles to our initial idea.



Figure 16: Automake Libtool Dataflow

First, Automake imposes certain constraints: the project has to use Autoconf. Autoconf is used to adjust the makefiles depending on the platform: when configure is called, it probes the system, tests the environment for certain functionality and supplies the corresponding values. Automake then generates files from templates containing replacement variables, which use the values from these system tests. These variables are passed to Automake either from command line options or from a thorough analysis of the platform. Moreover, most GNU make extensions are not recognized by Automake; using such extensions in a "Makefile.am" would lead to errors or confusing behavior. As we discussed during the telephone conference, due to the current "in use" status of the flow we definitely did not want to start from scratch, as I had initially tried to do. That meant that we would not really use the GNU Autotools in the sense of generating the "default" target functionality automatically. My feeling was that the GNUmakefiles were already generic to a high degree, so they should be used as templates for this purpose as well, giving us the opportunity not to start from scratch. After clarifying this, I checked imake. I could not find many examples except one online book from 1996 (Software Portability with Imake), but unfortunately it was only partially available, for introductory purposes. In my opinion, what is done in the included Make.defs is pretty much what Autoconf, using the correct macros, should also check and set the corresponding platform-specific values for. For that reason, and also because of our wish not to start everything from scratch, I then suggested simply including, for every single makefile, an additional directory-specific file (corresponding to each makefile), let us call it for example "Configuration.mk". In this way we would be able to separate the rules and targets, which will use the system-independent definition variables, from the definitions themselves. When make processes the include directive, it suspends reading of the containing makefile and reads from each listed file in turn. When that is finished, make resumes reading the makefile in which the directive appears.
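A minimal sketch of this suggested split is shown below; the variable names and values are illustrative and not taken from the actual project files.

# Configuration.mk -- unit-specific, hand-maintained definitions
UNIT       := ccu
ADD_UNITS  := ifx_basics
EXTRA_LIBS := -lpthread

# GNUmakefile -- generic part, identical for every unit
include Configuration.mk

all: lib$(UNIT).a

The generic rules then refer only to the variables defined in the included file, so a change in the flow touches a single template instead of every unit's makefile.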



I tried this very simple approach with the CCU Behavioral Unit by adding them to the GNUmakefile, and of course it worked. I just had some small problems w.r.t. the availability of ifxmpu.cpp and ifxmpu.h in the beh folder, due to the load rules. In the case where they are available in the view, everything is executed correctly. As I expected, however, this was not the approach we finally adopted. Therefore, Dr. Heinen and I discussed once more the following options we saw for proceeding:

1. use Automake
2. try to use makefile templates without generation, including config files
   a) symlinking makefile instances to templates
   b) on-the-fly copying of makefile instances from templates
3. makefile generation using a script to be developed

The first idea was difficult, as we were not willing to revise the entire build process (which we were also strongly recommended to avoid). After agreeing that Automake is too generic and creates makefiles that are too complex for our setup, and that, as we already work within a predefined environment, the strengths of Automake provided only limited help in our case, we focused on the other two possibilities. The second one would have been the leanest solution, but its feasibility had to be proven for a more complex unit like irx_hsdpa. The third idea should also have been realizable with moderate effort, but we would only go that way if 2) failed. To summarize, we had different types of information:

• module-specific information (e.g. link options, add units, unit name)
• project-specific information (e.g. aeneas base class)
• general / flow-specific information (e.g. systemc, ifx_basics, supported platforms)

Finally, all makefiles should be generated by assembling the three different types of information. Maybe a combination of the 2nd and the 3rd idea made the most sense. After defining the possible solutions, I started with the implementation of a simple script written in the Perl programming language. The initial "alpha" version was changed several times. The first sample script was partially used as the basis (represented in fig. 17) for the following implementations. The written Perl script (named mgen.pl) expects two files to be provided: Config.txt, which is unit dependent, and MakeTemplate.txt, which should be valid for all units. The sections which should be part of a particular GNUmakefile are stored in the %mksec hash.



Figure 17: The Makefile Generation Script Concept

For example, if a "Common" section should exist in all GNUmakefiles, it has to be specified as follows:

%mksec = ('TB',  'common, TB ',
          'BEH', 'common, BEH, anotherSectionOne ',
          'XML', 'common, XML, anotherSectionTwo ');

In the Config.txt file the target directory, into which the selected sections should be generated, is specified, e.g. .SECTION (TB) (targetdir=<dir>). In the example which can be seen in fig. 18, the "common" and the "TB" sections from the Config.txt and the MakeTemplate.txt will be merged into the same GNUmakefile under the "./TB" directory. The script uses a temporary file in which the currently generated file is stored before being moved to its target directory. This file is deleted at the end of the script execution. One can also specify in the %mksec associative array which sections should be generated per GNUmakefile. Important for the generation of a particular GNUmakefile is that one of the listed sections that comprise it should also contain a specification of the target directory. The section containing the target directory should be last in the comma-separated values list. For example, we have defined in mgen.pl:

%mksec = ('BEH', 'common, test, BEH');

Therefore, in the Config.txt we should define, at the line delimiter specifying the beginning of the Behavioral section, also the target directory, as .SECTION (BEH) (targetdir = ./beh). As the section delimiter, ".SECTION ()" is currently selected, since it is easily distinguishable from the surrounding text. This delimiter name can easily be adjusted later on in mgen.pl via the $delimName variable, to anything the user would like, if needed. Additional section modifiers such as "targetdir" are extracted by matching an occurrence of the $optNames variable.
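To make the mechanism concrete, a hypothetical pair of input fragments is sketched below; the variable names and the rule body are invented for this example, and only the .SECTION delimiters and the targetdir modifier follow the convention described above.

Config.txt (unit-specific user input):

.SECTION (common)
UNIT      := ccu
ADD_UNITS := ifx_basics

.SECTION (TB) (targetdir = ./tb)
EXTRA_LINK_OPTIONS := -lpthread

MakeTemplate.txt (valid for all units):

.SECTION (common)
ALL_UNITS := $(UNIT) $(ADD_UNITS)

.SECTION (TB)
testbench:
	$(CXX) -o $@ $(OBJS) $(EXTRA_LINK_OPTIONS)

With %mksec = ('TB', 'common, TB'); running mgen.pl on these two files would produce one GNUmakefile under ./tb containing the merged "common" and "TB" parts.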



Currently, every line that starts with ".#" is not copied to the finally generated Makefiles; this holds both for the Config files and for the MakeTemplate file. The normal GNUMakefile comment (#, without the leading dot) is, in contrast, copied verbatim into the GNUmakefile as a valid comment. The comment delimiter can also be changed by assigning a new one to the $commentDelim variable. Each new section in the Config file is stored as a key in the %mySections hash; using an associative array for storing the Config file improves the code readability significantly. The values corresponding to the keys hold the number of occurrences of the particular section in the Configuration file. The idea is that from this number the required number of repetitions of this section's code from the MakeTemplate.txt can be derived. For example, if we want to use the code specified in the Slave section more than once, we can keep it only once in the MakeTemplate.txt and simply increment the indexes of the used variables. The text following a particular section's definition in the Config.txt is appended to the already saved value of the corresponding key; if the same section definition appears again later, its text is correctly appended to the previously stored information for this section. After reading the Configuration file, the script reads the MakeTemplate.txt and starts generating the GNUMakefiles with all sections that have been specified for the particular instance. Whenever a section is matched in the MakeTemplate.txt, its saved Config.txt counterpart is copied to the temporary file. If this section occurred more than once in the Config.txt, the currently read line from the MakeTemplate.txt is stored in the %repeatPart hash. This is necessary because of the line-by-line reading of the MakeTemplate.txt: before reaching the next section's delimiter the script does not yet know which part has to be repeated. If the section occurred only once in the Config.txt, the corresponding part from the MakeTemplate.txt is appended directly to the temporary file. Any variable or target used inside these repeated sections should be followed by the {INDEX} string; by substituting it in each iteration copy, the information that changes between the repeated blocks can be specified while defining the block only once. This was the solution for handling more complex "sections" with "variable" behavior, such as the SLAVE, in the context of a generic script. On the second pass through the MakeTemplate.txt file, the script adds all blocks that had to be repeated, with their correct indexes, to the temporary file. Once everything is collected, the target directory where the GNUMakefile should be generated is extracted from the specified comprising sections. If the specified directory does not exist yet, it is created; the temporary file is then renamed to GNUMakefile and moved to the respective target location. As a final step the read-only property is set on the generated GNUMakefile; the "use Win32::OLE;" package specified at the beginning of the script is needed for this step. This procedure is repeated for every key specified in the %mksec hash.
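The {INDEX} mechanism can be illustrated with a small standalone sketch. It is a simplification, not the original script; the SLAVE section name, the create_lib targets and the indexed SLAVE_DIR variables appear in the real flow, while the concrete template text below is only an assumed example.

use strict;
use warnings;

# The SLAVE section occurred twice in the (assumed) Config.txt
my %mySections = ('SLAVE' => 2);

# Assumed template block; {INDEX} marks the places that change per copy
my $template_block =
    "create_lib_{INDEX}:\n" .
    "\t\$(MAKE) -C \$(SLAVE_DIR_{INDEX})\n";

my $generated = '';
for my $i (1 .. $mySections{'SLAVE'}) {
    # Copy the block and replace every {INDEX} by the running number
    (my $block = $template_block) =~ s/\{INDEX\}/$i/g;
    $generated .= $block;
}
print $generated;

Expanding the block twice in this way yields create_lib_1 and create_lib_2 rules that differ only in their index.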



In general we wanted to achieve a concise Configuration file, which we could do by throwing out everything that was not absolutely needed. The second thing to be changed was the section delimiter name: we needed something easily distinguishable from the surrounding text. We decided on .SECTION(<sectionName>) and defined it as a regular expression saved in a variable, so that it could easily be changed later if needed. Since the initial script I had written was already longer than 100 lines, Dr. Heinen suggested removing some of the redundant code and using an optimized solution provided by him. The gain from this change was great, as the improved readability let a person see what was going on at a glance. Additional code was still needed, but the use of an associative array for storing the Config file replaced more than 90% of the previous code and shortened the script by about a factor of three. After this I added some more functionality to the "generic" script: options for which Makefiles are to be generated, which "sections" they contain and under which directory they should be created. Finally I defined the possibility of using a special comment character, which allowed us to insert "highlighting" comments around the section headers without influencing the generated GNUMakefiles or their default comments. We could now define, for example, a "Common" section containing some default variable definitions and file inclusions. At this point I had to decide exactly how the Config and the Makefile Template should be separated, so that the former contains only what the user has to provide explicitly and the latter the derived parts that specify the implementation of the GNUMakefile target rules. The idea was to move the more generic parts into the common section and to remove all comments. This drastically reduced the length of a specific unit's Config file and improved the overview. The only disadvantage was that the "user" might not know the syntax or the possible variable assignments. In our case this was a minor drawback, because the "user" would most probably be an expert such as Dr. Heinen or Dr. Eckmueller. Nevertheless, we stuck to the idea that it might be better to also provide a "general" ConfigureGeneralTemplate file, with comments and hints about what can be defined in each section, which could be added to the Sysway documentation pages; the same holds for a MakeGeneralTemplate file. Once the script looked reasonably mature, I had a telephone conference with Dr. Eckmueller and Dr. Heinen to review the Makefile generation process; I also prepared a short PowerPoint presentation so that we had a better overview during the discussion. We agreed with Dr. Heinen that the script should be as generic as possible. Dr. Eckmueller and I still needed to invest some more time in the more complex sections, such as the Slave and the XML ones, because of their varying behavior.


Figure 18: Config and MakeTemplate files structure (both files consist of .SECTION(...) blocks such as COMMON and TB; matching sections of the Config and the MakeTemplate are merged into one GNUMakefile in the directory given by the targetdir modifier, while sections not listed in %mksec remain unused)

So we had the following points achieved:

1. A small Perl-based script.
2. Two files, named for example Config and MakeTemplate.
3. Each of them consisting of SECTIONS.
4. Generation of one or more GNUMakefiles from the defined SECTIONS.
5. Additional modifiers, e.g. the specification of the target directory.

The final steps before officially announcing the availability of the first version of the makefile generation were:

a. to cope with the handling of more complex "sections" with "variable" behavior, such as the SLAVE and XML ones, in the context of a generic script, and
b. to further optimize the "Config.txt" towards file conciseness.

The general structure of the "Config.txt" and the "MakefileTemplate.txt" can be seen in fig. 18. The discussion with Dr. Eckmueller about the current status gave us the basis for significantly shortening the "Config.txt". All variable definitions that could be used in more than one section were moved to the "Common" part.



In addition, the length was reduced by specifying the additionally needed extra include directories (EXTRA_INCDIRS) through the standard units they belong to, listed under the ADD_UNITS variable (the additional units needed to compile the unit library). For example, in the older "CCU" BEH GNUMakefile structure any EXTRA_INCDIRS entry of the type

-I"$(WORKAREA)/units/usi/source/sc/beh"

could be removed after assigning usi to the ADD_UNITS variable. The explicit inclusion of the libraries then becomes unnecessary, as it is provided automatically by the Make.defs file. Unfortunately, this trick works for each TB and BEH makefile only in the case where "xmlgen/sc" has to be included, but not for "xmlgen/systemc"; that is the reason why we decided it would be a good idea to implement these deviations in the future Make.defs file as well. Furthermore, by moving all generic parts, such as the VPATH, to one centralized place, we obtained a separation between module-specific and common information. If module-specific additional information is needed, we can simply use the GNUMakefile's built-in append functionality, e.g. VPATH += ../../../xml_gen/systemc:../../../xml_gen/c.
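To illustrate the effect on a unit's Config part (the variable names EXTRA_INCDIRS and ADD_UNITS are taken from the flow, while the concrete values are only an assumed example):

# Before: every include path of an additional unit listed explicitly
EXTRA_INCDIRS += -I"$(WORKAREA)/units/usi/source/sc/beh"

# After: only the additional unit is named; the include paths and the
# corresponding library are derived centrally by Make.defs
ADD_UNITS += usi

Only the second form has to appear in the unit-specific Config file, which is what keeps it short.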

The last step concerning the conciseness of the "Config.txt" file was to remove all comment lines. After this I went back to the first problem, the "sections" with "variable" behavior. We agreed with Dr. Heinen that the best approach was to keep the idea of sections: in the Config.txt we can specify, under the Slave sections, arbitrarily many directories or other variables followed by an index, while the block of code in the Makefile Template that has to be repeated several times, with slight differences, in the generated Makefiles is specified only once. The places where the differences between the repetitions may occur are marked with the keyword {INDEX}. Specifying the existence of a particular section in the Config one or more times gives the number of copies needed of its corresponding section code from the MakeTemplate file. Using the %mksec hash principle also gives us the freedom to have another common section especially for the SLAVE model, named for example Slave Common. In this way we can take advantage of the built-in functionality of the GNUMakefile system: the target implementations, which differ slightly depending on the number of Slave directories, are repeated several times in the generated GNUMakefile and are finally used as the dependencies of one general target rule specified in the SLAVE Common section, e.g. create_lib: create_lib_1 create_lib_2 ... create_lib_n, as sketched below. This gave us a solution for handling the more "complex" sections.
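For a unit with, say, two Slave directories the repeated blocks and the aggregate rule would end up roughly as follows in the generated GNUMakefile (an assumed example; only the create_lib targets and the indexed SLAVE_DIR variables are taken from the actual flow, and recipe lines must start with a tab):

# Per-slave blocks, produced from a single template block via {INDEX}
SLAVE_DIR_1 = ./slave_model_a       # assumed example directories
SLAVE_DIR_2 = ./slave_model_b

create_lib_1:
	$(MAKE) -C $(SLAVE_DIR_1)

create_lib_2:
	$(MAKE) -C $(SLAVE_DIR_2)

# General rule from the SLAVE Common section, depending on all repetitions
create_lib: create_lib_1 create_lib_2

.PHONY: create_lib create_lib_1 create_lib_2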



After this I collected several "next steps" for the VP makefile generation issue. Here I would also like to thank Tao Tao (another intern) for the support he provided me with while solving these topics. After discussing the details, the following tasks were started:

1. If a sub-make is started by a master Makefile and an error occurs during the sub-make execution, then the master makefile should also stop immediately (primarily taken over by Tao Tao and Dr. Heinen).
2. Construction of one really "general" major Makefile Template; this was of course partially related to the third task.
3. Config file generation:
   a) "migrating" the 1.0 to the 1.1.1 Sysway syntax, e.g. the objects from 1.0 can already be listed under the Sources (SRCS) variable;
   b) the same holds for the 1.1.1 Sysway GNUMakefiles.

The strange behavior addressed by the first task can easily be reproduced by introducing a syntax error in the BEH makefile and then calling the TB GNUmakefile, which of course at some point calls the BEH makefile. The expected behavior in such a situation would be that the TB GNUMakefile stops immediately; unfortunately this was not the case, and the execution of the GNUMakefile simply continued. For the other two tasks there was already some progress, and I continued with them. There were many places where an error occurred when trying to use the GNUMakefiles generated from the Config.txt. We decided that it is the user's primary responsibility to write the config files properly. For example, there should always be a carriage return and a line feed (CR LF) at the end of a text file in MS-DOS style, and a single line feed (LF) for the UNIX Config version; after checking in the files, the Clear Case software would in any case automatically convert them to the DOS text file format. If the user is not careful, the last line of the Config file (part of a certain section, e.g. "BEH") and the first line of the MakefileTemplate (part of the same section) can end up appended on the same line, which causes unexpected behavior. In such a case some of the variables needed in the respective unit's makefile might remain undefined; for example the LIBRARY_SW variable of the BEH section could accidentally be appended to the EXTRA_INCDIRS variable. The library file in which the object files are listed would then not be created, and the linker, which looks for the object files in this library, would simply not be able to locate it. Another possible error cause was that I had not always used the most recent Config Specification, and thus did not see all needed files and folders or their latest versions. A further source of mistakes was not paying enough attention to unit-specific file name differences. One example was the SRCH unit's ccss_batch_gen.slave.config: because I initially assumed it followed the general naming of the other units, I assigned the value srch_slave to the SLAVELIB_NAME variable, so that the script looked for ccss_batch_gen.srch_slave.config, which of course produced an error.
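One way to make the script more robust against the line-ending pitfall described above would be to normalize both DOS and UNIX endings while reading the Config and MakeTemplate files and to emit exactly one newline per stored line when writing the temporary file. This is a suggested hardening under those assumptions, not necessarily what the original mgen.pl does:

use strict;
use warnings;

# Strip CR LF as well as LF while reading, so that a Clear Case conversion
# or a missing final newline cannot glue a Config line and a MakeTemplate
# line of the same section together.
sub read_clean_lines {
    my ($filename) = @_;
    open my $fh, '<', $filename or die "Cannot open $filename: $!";
    my @lines;
    while (my $line = <$fh>) {
        $line =~ s/\r?\n\z//;     # remove LF or CR LF
        push @lines, $line;
    }
    close $fh;
    return @lines;
}

# Writing each stored line back with exactly one "\n" keeps sections separated.
my @config_lines = read_clean_lines('Config.txt');
print "$_\n" for @config_lines;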



In my opinion, "composite" variables such as UNITS = $(WORKAREA)/units (sometimes also found in some units under the name UNIT_DIR) were not really practical, which is why I decided to simply use the longer, explicit path definition. Another problem was the not-so-clean "clean" target. One idea to overcome this was to generate a list of all the targets a makefile creates and to remove exactly those at the end; unfortunately, such a list is not generated correctly if an error occurs during the first run. The same applies to the .d dependency files, i.e. the files that list the header files a source file depends on: if a source file is changed or removed, the list has to be updated, and if a header is requested but not found, it is simply listed as a .h file as if it resided in the current directory. Another question that had to be resolved was the correct syntax of the VPATH variable assignments. It turned out that separating the different directories with colons or with blanks is equivalent; nevertheless, for a better overview and to make an appended path easier to recognize, we decided in favor of the ":" syntax (e.g. VPATH += :../../../xml_gen/systemc). Some of the unit-specific makefile targets were outdated or already included in the Make.defs, so they were no longer needed; the "run" target, for instance, was redundant because gmake should normally be able to cope with this automatically. The $CODE_GEN_UNITS variable was changed to the $ALL_UNITS variable. For some units, such as sbram1 and sbram3, the xml directory did not exist, so it definitely made sense to first check for its presence and only then enter the directory. Since, as already discussed with Dr. Eckmueller, such a check would be implemented in the future, it was fine to remove the $CODE_GEN_UNITS variable and keep $ALL_UNITS even where no sc/xml directory exists. There were also units, such as olp, that in general were not needed anymore and were kept only because of their XML directory. The Purify and Quantify options were outdated and should be removed. As a rule of thumb, every variable in the previous makefiles whose name contained the word "EXTRA" was an input that the user had to provide manually. It was decided that the XML GNUMakefiles would in general be completely updated and changed in the future, so I was advised by Dr. Eckmueller not to consider them for the time being. Variables such as SLAVE_DIR should be defined with an index (e.g. SLAVE_DIR_1), so that the structure stays consistent when a unit needs more than one Slave directory; when several Slave directories are defined, one has to make sure that the name changes are reflected everywhere the variable is used. For the r99 unit a Windows-specific library was used, of which the UNIX compiler running under the Cygwin emulation was not aware; the problem was solved by adding -D"_WIN32" to the extra C++ flags. I also encountered some linking problems due to an object being specified more than once; the reason was that some *.obj files were produced with the same name. In the ORX_REL99 unit some *.c files were missing, because the folders in which they reside were not listed in the Config Spec. It was also important to clarify the MakeTemplate file structure.
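For the dependency and clean-up handling, a common GNU make idiom illustrates the direction we were aiming at. This is a general sketch with assumed example sources, not the actual Make.defs content (recipe lines must start with a tab):

# General GNU make idiom, not the actual Make.defs implementation:
# the compiler writes a .d file per object (-MMD), -MP adds phony rules
# for headers so a deleted header does not break the build, and "clean"
# removes exactly what this makefile produced.
SRCS := foo.cpp bar.cpp            # assumed example source files
OBJS := $(SRCS:.cpp=.o)
DEPS := $(SRCS:.cpp=.d)

all: $(OBJS)

%.o: %.cpp
	$(CXX) -MMD -MP -c $< -o $@

clean:
	rm -f $(OBJS) $(DEPS)

.PHONY: all clean

-include $(DEPS)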



For example, dependencies that are not specified, or that are simply empty variables, do not hurt units that do not need them, but can be used by others. In this way we could create a single file that was general enough for both cases. If a particular section is not needed, it is simply skipped or commented out in the unit's Config file. It is important to mention that any target that is specified not only in the included Make.defs file but also in the current Makefile is overridden by the latter definition. One of the final units to be configured was the USIF, part of the Control Model units. As a further step, the files had to be checked in at the correct places, so that everybody could already see and use them in the VOBs. After discussing the most appropriate place we confirmed our initial idea that it is strongly recommended to differentiate between the common parts, which are used by all modules, and the module-specific parts. The module-specific part I placed in $WORKAREA/units/<unit>/source/sc; the common part I placed in $WORKAREA/etc under the 'scmakegen' folder. In parallel I kept track of the task that Tao Tao had adopted. It turned out that the most probable cause for the makefiles not stopping immediately was the shell function call used inside the makefile to invoke other makefiles such as the BEH one. It was discussed that the preferred solution would be to get rid of this way of implementation completely and to think make-like, in a "target: dependencies" way, as sketched below. After almost all config files were ready and the MakeTemplate had become relatively general, we decided to go for the next step. It was suggested that it would be a good idea to use the makefile generation (with the MakefileTemplate changed to reflect the corresponding differences) in Sysway 2.0 as well. We decided to test my work in parallel to the migration of the first modules from Sysway 1.1.1 to Sysway 2.0: first the modules would be migrated as they are, and as soon as everything runs in Sysway 2.0, the updates could start. One of these updates could be my makefile generation; if it turns out to be beneficial and robust, it could officially be added to one of the next Sysway releases.
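The difference can be sketched as follows (a generic illustration with assumed directory names; the exact construct used in the original makefiles may differ): when the sub-make is started through a shell loop or make's shell function, errors inside it do not abort the loop and may not reach the outer make at all, whereas a plain "target: dependencies" rule with $(MAKE) -C lets a failing BEH build abort the calling makefile at once.

SUBDIRS := beh tb                  # assumed example sub-directories

# Preferred make-like style: each sub-directory is a real target, and a
# failing sub-make stops the master makefile immediately
all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@

# Problematic style for comparison: the loop continues past a failing
# inner make, and only the last iteration's exit status is reported
all_shell_style:
	for d in $(SUBDIRS); do $(MAKE) -C $$d; done

.PHONY: all all_shell_style $(SUBDIRS)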

Supervisor: Dr.-Ing. Stefan Heinen, Infineon Technologies AG

Signature: .....................................

Duisburg, 08 December 2008

