applstdef_emC.h - Basic headers adapt common sources to application and platform


Contents


Topic:.applComplAdapt.


Topic:.complAdapt.


Topic:.applstdef.

The applstdef_emC.h is the header file which should be included first in any source. It includes compl_adaption.h and contains all settings that determine the common behavior of the application. Therewith the sources of an application can be written once and reused for several applications and several systems.


1 The necessity of common basic headers

Topic:.applComplAdapt.necessity.

C sources are used, and should be used, in several projects and environments in unchanged form. But often there are incompatibilities, especially when user-defined types are used for fixed-width integer types (for example INT32) or for other language-specific (not application-specific) details. Another problem is the set of incompatibilities between the C++ and C languages. Often sources are deployed in C environments but should be reusable in C++ too.

Prevent "#ifdef MyPlatform" in applications

Conditional compilation is an often-used construct to avoid incompatibilities. For example, a sequence of inline assembly for the target platform is fenced off and replaced by a proper expression for a simulation environment. But in reused sources such project- and platform-specific conditionals cause a distension of the code covering all possibilities. Such source code is hardly readable anymore. The source code has to be changed and a new revision has to be created only because the next platform/project condition is incompatible with the current conditionals.

The better way is: use a macro in the common sources, define the macro in a substantial platform- and project-specific header, and include that header.
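As a minimal sketch of this approach (the macro name, file names, and assembly sequence are illustrative assumptions, not taken from the emC sources): each platform directory supplies its own definition of the macro, and the include path selects which one is compiled.

```c
/*Sketch only: DISABLE_INTERRUPTS and the header names are illustrative.*/

/*In platformTarget/platform_specifics.h (target, inline assembly):
 *#define DISABLE_INTERRUPTS() __asm__ volatile("cpsid i")
 */

/*In platformPC/platform_specifics.h (simulation, no-op):*/
#define DISABLE_INTERRUPTS() /*empty for simulation*/

/*The commonly used, unchanged source only writes:*/
static int criticalCounter = 0;

static void enterCritical(void) {
  DISABLE_INTERRUPTS();  /*resolved per platform via the include path*/
  criticalCounter += 1;
}
```

The common source never mentions a platform; the make file's include path decides which definition is effective.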

Usage of platform- or application-specific headers in more than one appearance in several directories

The principle is: an unchanged reused header or C file includes a header by name. The content of the included header should depend on the target platform etc. It is possible to have more than one header with the same file name, but located in several directories. The platform is associated in any case either with the specific compiler or with specific make files. In the make file, or via command-line options for the compiler, the include path is specified. The include path should refer to the proper directories where the correct platform-depending header is located. In this way commonly written sources are compiled with platform-depending properties. That is the philosopher's stone.

The behaviour of deep inner levels of code may be different depending on application decisions.

For example the kind of error handling is a decision of the application:

The application cannot use the C++ try-catch principle if the target compiler does not support it. On the other hand, for the target it may be acceptable to show a failure and stop execution. But for some error situations while developing the software on the PC, the try-catch concept is nice to have. Nevertheless the sources should remain unchanged and should not contain a #ifdef __PLATFORM-harp.

Defining the approach in the applstdef_emC.h and including that header helps, see the example for exception handling in TODO.

There are two headers to define application behaviour and platform specifics:


2 Variants and include of this files

Topic:.applComplAdapt.variants.

The

#include <applstdef_emC.h>

should be the first include line of any header file and therewith the first include of any source file too. Therewith the behavior of the application is determined with platform-independent sources.

The applstdef_emC.h includes the compl_adaption.h in its first lines. Both files exist more than once, in different directories for the different target systems and applications.

The compl_adaption.h should exist exactly once for any platform (target and PC), or more than once for the same platform under different conditions. Especially using Simulink-generated sources requires another definition of the basic types; therewith a Simulink-specific compl_adaption.h for the platform is necessary.

The applstdef_emC.h should exist either once inside the application's files, or more than once if the application should be compiled under other conditions on PC, on the target, on several targets, or with several specifications on the same target. Especially the error handling can differ between tests on the PC (for example with exception handling for error debugging, with C++ compilation) and the target (for example running as far as possible despite errors, or aborting on any error, with a small footprint).

For some situations pre-built variants of applstdef_emC.h exist in the pool of emC sources in the directory emC/incApplSpecific; they can be used immediately or used as templates:


3 Problem of conflicting definitions of fixed sized integer types

Topic:.complAdapt.int32.

In the far past C did not define integer types with a defined bit width. The thinking and approach in former times was:

That was adequate for the situation of non-microprocessor computers of the 1960s and 1970s. Because C was then used for microprocessors with 16 and 32 bit register width with flexible registers, that decision made for C is no longer adequate. As a workaround all users have defined their own fixed-size int types, with slightly different notations in very different header files. For one's own application it is a perfect world, without thinking outside the box.

But if sources are reused and applications are built from different sources, compatibility problems have to be managed, usually with hand-written, specifically adapted solutions.

The C99 standard defined these types (int32_t etc.) 10 years after they became necessary, and this standard was often not considered until 10 or 20 years after its definition. This is the situation. The latter is not a problem of disrespect, it is a problem of compatibility.
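As a hedged sketch of such a workaround (the left-hand names follow the emC style used later in this text; the right-hand types are valid for common 32-bit compilers and must be checked per target): a header in the style of compl_adaption.h maps fixed-width names onto compiler-known types with #define, so a conflicting definition from a foreign header can be resolved with #undef.

```c
/*Sketch: mapping fixed-width names with #define (not typedef).
 *The right-hand types are assumptions for a common 32-bit compiler. */
#define int8   signed char
#define int16  short
#define int32  int
#define uint32 unsigned int

/*a quick sanity check of the chosen width:*/
static int widthOf_int32(void) {
  return (int)(8u * sizeof(int32));
}
```

On a platform where `int` is not 32 bit, only this one header has to be changed; the user sources stay untouched.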

Let's show an example:

What should be done?

The possible and convenient decision is:

For the user's sources it means:


4 Problems with platforms with non-flexible integer types

Topic:.complAdapt..

The X86 platform (Intel) and comparable ones offer byte-oriented access. Therefore an int8_t is possible, an int16_t is well defined, and structs can be packed.

But this is an unrealistic world. What really happens:

What should be done in the user software:

For example, a struct should hold different values. The following is proper:

typedef struct MyStruct_t
{ float f;            //pos 0
  int16_t s1, s2;     //pos 4, 6
  double d;           //pos 8
  int8_t b1, b2, b3;  //pos 16..18
  int8_t __spare1__;  //fill pos 19
  int32_t __spare2__; //fill pos 20 till length = 24 = 3*8
} MyStruct;

The following order is not proper:

typedef struct MyStruct_t
{ int8_t b1, b2, b3;  //pos 0..2
  int16_t s1;         //pos 3, may be aligned to 4
  float f;            //pos 5 or 6, maybe aligned to 8
  int16_t s2;         //pos maybe 12
  double d;           //pos maybe 14, maybe aligned to 16
} MyStruct;

Because:

If the memory is only organized with 32-bit access, especially for hardware or Dual Port Memory access, a struct should not use int16_t or int8_t.
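A minimal sketch of this rule (struct and field names are illustrative, not from the emC sources): data intended for 32-bit-only access, e.g. in a Dual Port Memory, uses only int32_t words, and two 16-bit values are packed into one word by hand instead of declaring int16_t members.

```c
#include <stdint.h>

/*Sketch: only int32_t members, proper for 32-bit-only memory access.*/
typedef struct DpRamData_t {
  int32_t cmd_and_state;  /*two 16-bit values packed in one word*/
  int32_t values[4];      /*payload, one value per 32-bit word*/
} DpRamData;

/*accessors for the packed halves:*/
static int32_t getCmd(DpRamData const* thiz) {
  return (thiz->cmd_and_state >> 16) & 0xFFFF;
}
static int32_t getState(DpRamData const* thiz) {
  return thiz->cmd_and_state & 0xFFFF;
}
```

Because no member is smaller than the memory's access width, the layout is identical on both sides of the memory, independent of packing pragmas.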


5 Problem of processors with 1 address step per 32 bit (non-byte address step) and only 32 bit operations

Topic:.complAdapt..

Some DSPs (Digital Signal Processors) from Analog Devices have only 32 bit registers and count the memory address in 32 bit steps. One address step is 32 bit, sizeof(int)==1. This is considered more efficient for the hardware.

If this processor stores a value which needs less than 32 bit, 32 bits are used nevertheless. There are three problems which mess up programming in C:

It is possible to access the same memory both from the 32 bit DSP side and by another processor which can access bytewise. It is a special construct in hardware. The simplest form is a Dual Port Memory chip such as the IDT70P269 from Integrated Device Technology. But a Dual Port Memory area as part of an FPGA (Field Programmable Gate Array chip) is similar. Another possibility is a hardware access to the RAM from another hardware bus than the DSP bus with a hardware direct memory access (do not confuse this with DMA on the DSP chip itself). This should be regarded in the hardware layout of the board.


5.1 Storing character strings, accessing them from a byte-accessing processor

Topic:.complAdapt...

If the DSP stores character strings with 32 bit per char, the memory layout for

char s[8] = "abcd";

looks like

0x00000061 0x00000062 0x00000063 0x00000064 0x00000000 0x00000000 0x00000000 0x00000000

That memory layout is seen from the other side, from the byte-accessing processor, too, if 32 bit words are mapped to 32 bit words. If that memory content is read as char const* string information, only the "a" is seen, because after it 0-bytes follow. That is not proper to use.

On the DSP processor usually no complex string processing is done. But the DSP processor may report some errors in string form, readable in plain English, which is more comprehensible than mere numbers for error reports. For example the message "faulty value read: -99.999" should be shown. The message may be sent via socket communication or presented on a display which is driven by the byte processor. On that side no special effort should be necessary; it would confuse the software.

On the DSP side, for that example, a constant text "faulty value read: " is to be combined with a conversion of a number to a string. It may be done with special programming, only for this case and only for the DSP, but testable in compatible form on the PC. The sprintf(...) may not be proper to use.

The challenge is: storing the string in a proper way. The only convenient way to do that compatibly for the 32-bit DSP and a normal test platform (PC) is:

#define Char4 unsigned int
#define CHAR_4(a,b,c,d) (a + (((Char4)b)<<8) + (((Char4)c)<<16) + (((Char4)d)<<24) )
Char4 msg_faultyRead[] = { CHAR_4('f','a','u','l'), CHAR_4('t','y',' ','v')
                         , CHAR_4('a','l','u','e'), CHAR_4(' ','r','e','a')
                         , CHAR_4('d',':',' ', 0 ) };

This is readable and writable in the source, and it produces a constant string which is packed in memory and is seen as a normal string literal on the other side, the byte-accessing processor, in little endian. To combine the message with a value, the string should be copied via memcpy and the value should be converted with a special routine such as:

Char4 msgBuffer[20];
memcpy(msgBuffer, msg_faultyRead, sizeof(msg_faultyRead));
appendFormatedFloat_Char4_emC(msgBuffer, sizeof(msgBuffer), 3,3,val);

The latter routine searches the first 0 byte in the Char4 array and appends the digits of the given float val. This routine is not complex; it can be found in the emC sources.
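The first step of such an append routine, the search for the terminating 0 byte inside the packed Char4 words, can be sketched as follows. This is a simplified illustration only; the real appendFormatedFloat_Char4_emC in the emC sources does more (digit conversion, buffer limits).

```c
/*Sketch: locate the first 0 byte in a packed Char4 string.*/
typedef unsigned int Char4;

/*returns the char index of the first 0 byte, or -1 if none is found*/
static int indexOfZeroByte_Char4(Char4 const* msg, int zWords) {
  for (int ix = 0; ix < zWords; ++ix) {
    for (int b = 0; b < 4; ++b) {
      if (((msg[ix] >> (8 * b)) & 0xFF) == 0) { return 4 * ix + b; }
    }
  }
  return -1;  /*no terminating 0 in the given words*/
}
```

The digits of the value are then written bytewise into the words starting at the found position, in the same little-endian packing as CHAR_4 produces.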

There seem to be better solutions, but they have problems:


5.2 Conversion of an int16_t to float

Topic:.complAdapt...

If a 16 bit value is used on an Analog Devices DSP, the value is stored in a 32 bit location. The higher 16 bits may be accessible, but they are not touched by a 16-bit operation if this operation masks only the 16 bits. For example, an angle from -180 to 179.99 (degrees) is stored in 16 bit (0x8000 is -180). To convert it to float, only 16 bits should be used to get a circular angle in the range -180..+179.99.

Follow the algorithm with such a circular angle value. It is stored in an integer cell because the overflow produces the expected circular behavior: -179 - 3 should result in +178.

int16_t angle;       //an angle in range -180..179.99 in 16 bit
angle += anglediff;  //because the angle is really 32 bit,
                     //the value of angle may be outside -180...179
float anglef = ((float)angle) * 180.0f / 32768.0f;

For a platform which knows the real 16 bit integer type this result is correct. For the DSP the resulting float value shows the overflowed value too, in a range outside -180..180. This is unexpected behavior resulting from these simple C source lines on a 32-bit DSP processor, if that hardware-depending behavior is not known.

The result is proper with the following line:

float anglef = ((float)(((int32_t)angle)<<16)) * (180.0f / 2147483648.0f);

The really-32-bit angle is converted explicitly to a full 32 bit value with a left shift, to keep the same range. Then the result is correct. At this point of programming the hardware property of the processor must be well known. For this reason the C99 standard defines some values like

#define INT16_MAX 2147483647

This value, defined for the target processor, documents that the type int16_t can hold a value up to 2147483647, which is 32 bit wide. But the number of bits for each type is not contained in the C99 standard. That value should be known. Therefore it is defined in the compl_adaption.h with

#define INT16_NROFBITS 32

With this information the conversion line can be written as

float anglef = ((float)(angle << (INT16_NROFBITS - 16))) * (180.0f / (-(float)INT16_MIN));

For the 32-bit DSP processor this results in <<16, but for a 16 bit processor it results in <<0. The constant values are calculated by the compiler. The (value << 0) is optimized by the compiler; no shift is done because the <<0 is known at compile time. It means both compilation results are functionally correct and optimized at run time.

Note: The C99 standard does not define the number of bits for each type, only the range. The compl_adaption.h should define the number of bits too.


6 Content of compl_adaption.h

Topic:.complAdapt.compl_adaption.

The compl_adaption.h header file should contain all definitions to work with C at user level without knowledge and consideration of the target system and compiler specialities. Note: the application-specific commitments should not be part of that file. They are contained in the applstdef_emC.h instead, see TODO.


6.1 int types with fix bit width

Topic:.complAdapt.compl_adaption..

The compl_adaption.h header should define the following things properly for the compiler situation and the situation of other system includes.

All types should be defined using a #define statement, not using a typedef. The reason is: sometimes (especially for the operating-system adaption layer) other header files have to be included which define the same identifiers in an adequate way (compatible for usage while compiling) but incompatible in the definitions themselves. If the first-included <os_types_def.h> defines the types with #define, an #undef statement can be written before including the other necessary header files. But if a typedef is used in <os_types_def.h>, the difference can only be resolved by changing the other header files (removing the unnecessary definitions). But the other included header files are originals which should not be changed. Typically it may be necessary to write:

#include <os_types_def.h>
#include "someHeadersOfUser"  //using definition of os_types_def.h
#undef int32
#undef uint32
#undef int16
#undef uint16
#include <specialPlatformHeader.h>  //defines this types in another way but compatible
...implementation using the platformheader.h
...and the someHeadersOfUser including os_types_def.h-properties

This construct is not typical for the application part of the software. The application parts should not depend on special platform headers. But it is typical for the OS adaption layer and for drivers, which have to use the <specialPlatformHeader.h>.


6.2 Enhanced common types

Topic:.complAdapt.compl_adaption..

The C language standard doesn't define all necessary types. Independent of the used compiler and options (C/C++), the following types should be present for usage:


6.3 Notification of used compiler and platform

Topic:.complAdapt.compl_adaption..

Two defined labels allow conditional compiling in user sources. Conditional compiling is not recommended. But if it is necessary or desired, it should be done in a unified schema. The defines are platform- and maybe project-depending. They should be queried only in a positive way (#ifdef), not negatively (#ifndef). For usage on Windows with Visual Studio 6, the labels are named:

#define __OS_IS_WINDOWS__
#define __COMPILER_IS_MSC6__

Using both labels, a special user routine can query for example:

#ifdef __OS_IS_WINDOWS__
  //some statements for simulation ....
#endif

The distinction between the OS label and the compiler label is: usually the OS platform should be queried. Only in special cases the compiler may be queried, maybe for specific examination of errors etc.

These labels should not be used to force conditional compilation for common problems, for example little/big endian, alignment requests etc.


6.4 Pragmas to prevent warnings

Topic:.complAdapt.compl_adaption..

In general, any warning may be a hint to an error. But some warnings are ignorable. If such warnings are switched off, the critical warnings are better visible.

Warnings can be switched off individually by pragmas. The commonly valid pragmas to disable uncritical warnings should be included in the os_types_def.h. But only common and uncritical ones! The os_types_def.h can be adapted individually. In this case an individual setting of warning pragmas for a C compilation project is possible.

The following example shows some warnings which are switched off for the Microsoft Visual Studio compiler:

#pragma warning(disable:4100) //unused argument
#pragma warning(disable:4127) //conditional expression is constant
#pragma warning(disable:4214) //nonstandard extension used : bit field types other than int
#pragma warning(disable:4189) //local variable is initialized but not referenced
#pragma warning(disable:4201) //nonstandard extension used : nameless struct/union

6.5 Bit width and endian for the target processor

Topic:.complAdapt.compl_adaption..

That is usual but not valid in any case. Some processor architectures are oriented to full-integer and float numerical information, saving hardware effort for the memory access. Therefore they address the memory in 32-bit words, for example. In that case character values are not represented efficiently, but this may not be a problem. The MemUnit is an int then:

#define MemUnit int

The user can use a MemUnit* pointer for address calculations. Mostly a char* is used instead in user sources, assuming that a memory word is a byte. But that is wrong in some cases.

But in the case of a wider MemUnit, the number of bytes per MemUnit may be 2 or 4. A byte is always 8 bit. This constant is necessary to calculate space when interchanging data, for example via a Dual-Port-RAM, where a processor with another memory address mechanism is the partner.

For 32-bit architectures it may be possible that an address consists of a 32-bit address and additional segment information. In that case an intPTR may need to contain the segment too; it means it needs more than 32 bit. But in most cases the address can be stored in 32 bit. Likewise it may be possible that an address is condensed to 32 bit by truncation of (unused) address bits. Special operations may exist to do that. Then the intPTR should represent the condensed address for common usage.
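A hedged sketch of such definitions for a byte-addressing platform (PC): the names MemUnit and intPTR follow the text above, the constant name BYTE_IN_MemUnit is an assumption for the bytes-per-MemUnit constant mentioned above. On a 32-bit-word addressing DSP, MemUnit would be int and the constant 4.

```c
#include <stdint.h>

/*Sketch of compl_adaption.h definitions for a byte-addressing PC.*/
#define MemUnit char        /*smallest addressable unit on a PC*/
#define BYTE_IN_MemUnit 1   /*number of bytes per MemUnit, assumption*/
#define intPTR uintptr_t    /*integer type able to hold an address*/

/*address distance in bytes, independent of the addressing scheme:*/
static long distanceInBytes(void const* a, void const* b) {
  return (long)((MemUnit const*)b - (MemUnit const*)a) * BYTE_IN_MemUnit;
}
```

User sources written against MemUnit and BYTE_IN_MemUnit compute correct sizes on both kinds of platforms without conditional compilation.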


6.6 OS_PtrValue

Topic:.complAdapt.compl_adaption..

This structure is used to hold a pointer and an associated integer value, to return both by value. It should be organized in a way that forces the usage of registers for the returned values. Normally, struct data which are returned by value are copied from the stack to another stack location while executing the return machine instructions; after that they may be copied a second time into the destination struct variable if the return value is assigned to one. The usage of registers is much more effective. Because the usage of registers may depend on some compiler specialities, the definition of this base struct is organized in this header. Frequently the definition of this struct is as shown in the example. But sometimes special constructs may be necessary.

The struct is defined as (pattern, frequent form):

typedef struct OS_PtrValue_t
{ char* ptr__;
  int32 value__;
}OS_PtrValue;

The pointer may be a void* in theory, but a char* allows inspecting a referenced string while debugging. It may be opportune too to write

typedef struct OS_PtrValue_t
{ union{ char* c; int32Array* a;} ptr__;
  int32 value__;
}OS_PtrValue;

to see an int array while debugging. It may be adjustable which int type is stored and in which form the pointer is stored (segmentation? see intPTR). Especially for simple 16-bit processors a proper definition should be found.

Some macros are defined to access the values and to build constants:

//NOTE: use a local variable to prevent evaluating SRC twice if it is a complex expression.
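A sketch of how such access macros might look (the macro names and signatures here are assumptions in the style of the text, not necessarily the exact ones in the emC sources):

```c
/*Sketch: OS_PtrValue with illustrative access macros.*/
typedef int int32;
typedef struct OS_PtrValue_t {
  char* ptr__;
  int32 value__;
} OS_PtrValue;

#define value_OS_PtrValue(THIZ)      ((THIZ).value__)
#define PTR_OS_PtrValue(THIZ, TYPE)  ((TYPE*)(THIZ).ptr__)
/*THIZ is used twice here, hence it should be a simple variable:*/
#define set_OS_PtrValue(THIZ, PTR, VAL) \
  { (THIZ).ptr__ = (char*)(PTR); (THIZ).value__ = (VAL); }
```

The macros hide whether the pointer is stored plainly or condensed (segmentation), so user sources stay platform-independent.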


7 Content of os_types_def_common.h

Topic:.complAdapt.content.

The header file <os_types_def_common.h> should normally be included in <os_types_def.h>. It contains definitions which are valid and proper for all operating systems and compiler variants, but necessary respectively recommended for low-level programming in C and C++. The OSAL source package contains a version of this header file for common usage. But for special requirements it is nevertheless possible to adjust some properties, by including a changed variant of this file contained in the user's source space. As a rule, the original version should be used.


7.1 Specials for C++: extern "C" - usage

Topic:.complAdapt.content..

Generally, all sources should be usable both for C and C++ compiling. It is an advantage that these programming languages are largely compatible. The extern "C" expression allows the usage of C-compiled library parts in a C++ environment. But the extern "C" expression is understood only by C++ compilation. Usually headers of C-like functions are encapsulated in

#ifdef __cplusplus
extern "C" {
#endif
//...the definitions of this header
#ifdef __cplusplus
}  //extern "C"
#endif

This form allows the usage of the same header for C compilation, without activating this declaration, and for C++ compilation. It is also proper to write an extern "C" at any extern declaration. In some cases an extern "C" declaration is helpful in C++, whereas in C there shouldn't be an extern instead. For example a typedef can be designated with extern "C" for the C++ compilation to mark the type as a C type. But it can't be replaced by an extern in C.

The effect of extern "C" is: labels for linking are built in the simple C manner: functions are designated with their simple name, usually with a prefixed _. The label doesn't depend on the function signature (argument types). The same applies to forward-declared variables. In contrast, in C++ the labels for linking are built implicitly with some additional type information: argument types for functions respectively methods, const or volatile information for variables etc. It is an advantage in C++ that the labels for linking contain some additional information about the element, not only the simple name. Therefore incompatibilities can be detected at link time. But this advantage prevents the compatibility with C, and it is more difficult to correct errors which are checked more strictly than necessary in C++. Therefore an extern "C" declaration in C++ makes sense in some cases.

To support a simple usage of extern "C" in sources which are used both for C and C++, the following macros are declared:
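A possible form of such a macro (the name extern_C is an assumption in the style of the emC sources; the exact macro set there may differ):

```c
/*Sketch: one macro usable in headers compiled by both languages.*/
#ifdef __cplusplus
  #define extern_C extern "C"
#else
  #define extern_C extern
#endif

/*usage in a shared header:*/
extern_C int add2_sketch(int a, int b);

/*implementation (in a .c file):*/
int add2_sketch(int a, int b) { return a + b; }
```

Compiled as C++, the declaration gets C linkage; compiled as C, it is a plain extern declaration, so the same header serves both worlds.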


8 Which content os_types_def.h should not contain

Topic:.complAdapt.noContent.

Because the <os_types_def.h> is included in any C file, some definitions which are used as basics for an application tend to find their way into this file. But the effect or disadvantage is: the <os_types_def.h> is then no longer a file for the platform and compiler but for the application. It contains too many different things. Therefore reuse for other applications on the adequate platform is aggravated. Therefore:


9 Content of applstdef_emC.h

Topic:.applstdef.content.

The


#include <applstdef_emC.h>

should be the first include line of any header file and therewith the first include of any source file too. Therewith the behavior of the application is determined with platform-independent sources.

The applstdef_emC.h includes the compl_adaption.h in its first lines. Both files exist more than once, in different directories for the different target systems and applications.


9.1 General settings

Topic:.applstdef.content..

Some compiler switches are defined here. They determine general behavior.

Use of reflection:

/**With this compiler switch the reflection should not ... */
#define __DONOTUSE_REFLECTION__

The reflection mechanism can be used in general. But on specific platforms reflection should not be used, because it needs some memory space and string processing. With this compiler switch reflection can be deselected for compilation in the user sources.

Use of C++ parts of several sources:

/**The compiler switch __CPLUSPLUSJcpp should set only ...*/
//#define __CPLUSPLUSJcpp

This switch should be set (uncommented) only if the C++ parts of sources which offer both C and C++ are used in the application. The application is a C++ source then. Especially this is for Java2C-generated sources, but useful for others too. If the switch is not set, the C++ parts of some sources which are guarded with this compiler switch are excluded.


9.2 include compl_adaption.h

Topic:.applstdef.content..

In this order, now

  • include <compl_adaption.h>

  • include <OSAL/os_types_def_common.h>

are included. The switches above can be used in those files.


    9.3 Using the ObjectJc head data or not, using StringJc capabilities

    Topic:.applstdef.content..

    The application will be simpler if the more complex string processing capabilities and the possible super class of all data are not used. This is for small-footprint applications. If the application sources need those capabilities, the following lines should be commented out.

    /**Including this file, the ObjectJc.h is not included. */

  • include <source/FwConv_h/ObjectJc_simple.h>

  • If that header is included, the struct ObjectJc is defined as a simple struct with only one int32 element, the identification number of the data. Therewith reflection cannot be used and virtual operations cannot be used. See the content of that file. The invocation of initReflection_ObjectJc(... reflection_...) is possible, but the forward-declared reflection instance is not used. Hence it need not exist at link time. It implies the setting of the compiler switches __DONOTUSE_REFLECTION__ and __NoCharSeqJcCapabilities__.

    /**Define __NoCharSeqJcCapabilities__ only for simple systems ... */
    #define __NoCharSeqJcCapabilities__
    

    That compiler switch controls the compilation especially of parts of the file fw_String.c. A CharSeq can provide a string as a sequence of chars with overridden (dynamically linked) operations charAt(obj, pos) and length(obj). For that capability some more code has to be present at link time, especially the access to the virtual table of the instance. Setting this compiler switch, the parts of code using that are excluded from compilation; therefore they are not necessary at link time. For the application it means that the feature of virtual access to CharSeq operations cannot be used. It is proper for simple systems without complex string functionality. See TODO StringJc.


    9.4 Assertion and Exception

    Topic:.applstdef.content..

    In the C++ (and Java, C# etc.) languages the concept of try-catch-throw is established (exception handling). This is a somewhat better concept than testing return values of each called routine or testing errno, as is usual in C. The advantage of try-catch-throw is: in normal programming, regarding all possible error situations on all levels of operations is not needed. Only in the source where an error should be tested because the algorithm demands it, it should be regarded anyway and a throw invoked. And, on the opposite side, in sources where any error in any deeper level may be expected and should be handled as the algorithm demands, a try and catch is proper to write.

    The necessity of handling error return values in C is often postponed to a later time, because the core algorithm should be programmed and tested first. Then, at a later time, the consideration of all possible error situations is too much effort to program; it won't be done considering the time line for development ...

    Therefore the try-catch-throw concept is helpful.

    The emC programming style knows three levels of using TRY-CATCH-THROW via macros. The user sources themselves need not be adapted for these levels; the macros are adapted. See exception TODO.

    The last statement, ret < 0, is only executed in the non-try-catch-throw mode. It is additional for that mode but not essential. It may not be necessary if the conditions to run are detected in another way.

    //called level:
    int executeSub(...) {
      if(arrayIndex >= sizeArray || arrayIndex < 0) {
        THROW(IndexOutOfBoundsException, "illegal index", arrayIndex, -1);
      }
      myArray[arrayIndex] = ... //access only with correct index!
    }

    The THROW statement invokes either the throw of C++, a longjmp, or a log entry and return.
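    The longjmp variant can be sketched as follows. This is strongly simplified and the macro names carry a _SKETCH suffix to mark them as illustrative; the real emC macros additionally handle the ThreadContext, Stacktrace, and exception objects.

    ```c
    #include <setjmp.h>

    /*Sketch of a longjmp-based TRY/CATCH/THROW mapping.*/
    static jmp_buf tryBuf_sketch;
    static int exceptionId_sketch = 0;

    #define TRY_SKETCH   if (setjmp(tryBuf_sketch) == 0)
    #define CATCH_SKETCH else
    #define THROW_SKETCH(id) { exceptionId_sketch = (id); longjmp(tryBuf_sketch, 1); }

    /*a called level as in the example above:*/
    static void checkIndex(int ix, int size) {
      if (ix < 0 || ix >= size) { THROW_SKETCH(42); }
    }
    ```

    With this mapping the same user source compiles for C++ (throw), for plain C (longjmp), or for a log-and-return variant, selected only in applstdef_emC.h.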

    The applstdef_emC.h controls for the application how the TRY-CATCH-THROW is used:

    #include <Fwc/fw_threadContext.h>
    //#include <Fwc/fw_Exception.h>
    #include <Fwc/fw_ExcStacktrcNo.h>
    

    The ThreadContext is a concept for both: protocolling the stack levels and providing a thread-local memory area. For the exception handling, fw_threadContext.h is necessary for the Stacktrace. Because of

    #include <incApplSpecific/applConv/assert_simpleStop.h>