Note: The strategy for the src tree was changed in 2022/07. This is the new description, using src/cmpnX/… instead of src/main/…

date: 2022-08-03

1. The idea of a common structure of files for working directories

The following idea is taken from maven:

I do not generally recommend using maven or gradle (see Wikipedia); that is a different decision than the decision for this file tree. But the idea of the maven file tree structure is worth using, also outside of maven or gradle.

The basis for this idea of a well-defined file tree is "convention over configuration". That was one of the important steps from the older tool ANT from apache.org towards maven. In ANT, or sometimes in simple make systems, or in software in general, there is no rule for which files are stored where.

But the idea of "convention over configuration" is not the reason for the approach presented here; such configuration is not so hard to create. The more important reason is having a well-defined order of files, especially when several components are used. Primarily, the maven file tree divides the sources into the application itself (main), test sources (test) and also docs. Furthermore it defines where built files are stored (build), and also some libs. That is practical. Here some additional ideas in this direction are presented, more consistently than in maven or gradle.

The question of how to deal with components (see the next chapter How to separate components) is also a primary concern for this file tree. It is solved differently than in maven or gradle.

1.1. Sources in the working tree

The basic idea can be explained as follows, also considering how to deal with components, differently than in maven:

Table 1. basic maven file tree

tree

src

The first level of sources in a working area or "sandbox" is always src

src/main

The second level is main for the sources of the application itself. This is the original maven convention.

src/test

The second level is test for test sources, used only for testing, not for the product itself, as in maven.

src/main/java

The third level describes the kind of sources, for maven/gradle often Java sources.

src/test/java

The same strategy below src/test: the third level names the kind of sources.

src/doc/docx

Documents should be stored parallel to main and test.

1.2. Sources of different components in the working tree

Now, deviating from the maven idea, instead of the anonymous main the name of the component is used. This makes it possible to have several component sources side by side. See also the next chapter How to separate components.

Table 2. file tree for more than one component

tree

src/cmpnX

The 'main' sources of one component.

src/cmpnX_test

The test of a component is also handled as a component, in the same manner.

src/cmpnX/java/cmpnName

The third level follows the maven approach: the sources are sorted by programming language.

src/cmpnX/cpp

The same applies for the C and C++ languages, both marked with cpp. This is common practice.

src/cmpnX/asciidoc

This contains the documentation for the component, related to the sources. Especially for https://asciidoctor.org/ or https://asciidoc.org/ the sources of the documentation should be close to the sources of the programs, because then they can simply be included.

src/cmpnX/.git

Using a component-specific src directory is the basic idea for the repository of the sources. Unlike the often seen layout where the root of the git tree is parallel to the src directory, here the git repository is related to the component. The .git can be the directory of the repository itself, or also a reference to a directory where the repository is located. See ../../Git/html/GitUsage.html#gitdir

You can mount a component's source directory via a symbolic link. Then you can have the same component with the same current (!) source version in parallel in more than one working tree, or one working tree can use the current version maintained anywhere else. This is often a proper approach.
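For example (a sketch only; the central path D:\common\src\cmpnX is an assumption, on UNIX/Linux ln -s does the same):

REM make the centrally maintained component visible in this working tree
REM remove and re-create the junction to switch to another source location
mklink /J src\cmpnX D:\common\src\cmpnX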

As you see in the table above, the repository of the version management system (here .git) has its root in this component tree; only the sources of this one component are part of this version management. See also the chapter separated git repository for each component.

Table 3. version management roots in the file tree for more than one component

tree

.git or src/.git

The root of version management, not for the sources but for this working tree. The working tree may be dedicated to the test of sources, or to an application consisting of many components.

src/cmpnX_test

The test sources may be part of the version management at the root.

src/cmpnX_test/.git

Or the test sources have their own version management, because they are elaborate and reused in several applications.

src/cmpnX/.git

root of version management for the component sources

src/cmpnX/script_UsedCmpn/mklink_UsedCmpn.bat
src/cmpnX/script_UsedCmpn/clone_UsedCmpn.sh
src/cmpnX/script_UsedCmpn/load_UsedLibs.sh

The question is where to get the other components from, if they are not part of the version management at the root. The question should be answered per component; each component may need specific other ones. But all components should be placed in the file tree structure shown here. Components used in several places should all have the same version.

src/cmpnY/.git
src/cmpnY/script_UsedCmpn/mklink_UsedCmpn.bat
src/cmpnY/script_UsedCmpn/clone_UsedCmpn.sh
src/cmpnY/script_UsedCmpn/load_UsedLibs.sh

Generally, if the components use the same other components or libs, these already exist after loading them for cmpnX when cmpnY wants to load them. If the versions differ, this should be detected in the load scripts and clarified by the owner of the application (the whole src working tree).

See also the chapter Git repository for the whole src tree. The relation between more than one git repository is established manually, not with the git-intrinsic multi-repository management (submodules). The reason is flexibility: a sub-git (a necessary module) can be created by calling the clone_UsedCmpn.sh script. But the proper file tree of the component can also be linked, with an already existing git, or provided as a copy from a zip file or whatever. The decision can be made by the user in a way specific to this working tree. The scripts support it; they are not executed automatically.
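The content of these scripts is specific to each component and not prescribed here. As an illustration only (not the actual script content, all paths are assumptions), a mklink_UsedCmpn.bat could link the used components from a central working tree as siblings into the own src tree:

echo off
REM illustration only: link used components beside cmpnX in the same src tree
REM %~d0%~p0 is the path of this script, located in src\cmpnX\script_UsedCmpn
cd /D %~d0%~p0\..\..
if not exist cmpnY mklink /J cmpnY D:\common\src\cmpnY
if not exist cmpnZ mklink /J cmpnZ D:\common\src\cmpnZ
pause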

Note: the .git can be the directory of the repository itself, or also a reference to a directory where the repository is located. See ../../Git/html/GitUsage.html#gitdir

1.3. Distinction of the kind of sources in the third level

Some more entries for the third level:

Table 4. file tree, third level

tree

src/*/resources

The third level with another kind of sources, here the so-called resource files for Java. This applies of course for all src/main/…, src/test/… and all others.

src/*/smlk

What kind of sources is smlk? It is Simulink, which is also often used.

src/*/mySpecificLang

Either a language has a commonly known mnemonic, or you define an abbreviation. This system of naming the language at this level is also favored in the gradle tree.

src/*/asciidoc

Of course here too the third level names the language or kind of documentation. You can also introduce src/*/docs for Word documents. But regard that these are not textual files, so versioning does not work so well for them. This means they should be stored outside of a git or zip file tree, hence in src/docs.

src/IDE/MSVS/appl

These are the folders for the IDE files (Integrated Development Environment) which are also located in the IDE directory, see below. As in all other src directories, the third level should name the platform, in this example MSVS for 'Microsoft Visual Studio', or others such as CCS (Code Composer Studio), ecl for 'Eclipse' etc.

src/*/IDE/

If the IDE files are specific, for example for a test, they can also be located in a component's (test component's) sub directory. But note that these are the versioned copies of the IDE files, not the working tree of the IDE.

All in all, the real sources are all located in src. This is often only a small amount, 1..10 MByte. The src subdir must not contain temporary files, and also no megabytes of content which can also be found at dedicated locations on the internet. But the links to these locations should be part of the sources.

Only content in src should be versioned.

It should be taken into consideration that the whole src tree may also be stored and exchanged, often as a zip file. This is a second approach parallel to a version history. Both have their justification. The version history is usually important to trace the development, whereas freely bundled source files of a dedicated version are proper for delivery and for comparison of main versions.

Table 5. backup parallel to src tree

tree

src_back

This directory can be used to store zip archives of the whole src or of specific components. Always use a time stamp in the form yyyy-mm-dd and a unique name for such zips. Storing zips is not a contradiction to versioning; it completes the idea of versioning.
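For example, a manual backup before a larger change could look like the following sketch (it assumes a zip tool, here 7-Zip on the PATH; any other archiver works as well):

REM pack only the component sources, with date stamp and a telling name, into src_back
7z a src_back\2022-08-03_cmpnX_beforeRefactoring.zip src\cmpnX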

1.4. Other parts than sources in the working tree

Furthermore, some common tools and also libs for organization should be placed in the working tree, as well as temporaries:

Table 6. file tree common tools and libs

tree

libs

This can be used as a directory into which external libraries are loaded which can be found on the internet. Such a directory is sometimes found in projects, and it is sensible. But in maven, external libraries are often stored in a system directory instead. See Libs and tools on the source tree.

src/load_tools

See Libs and tools on the source tree. This directory contains only the files which organize the loading of tools from the internet, not the loaded files themselves.

tools

This directory contains tools for working, which are usually only simple jar files, batch files or shell scripts. Some of these files are delivered with a given version of an application, but some others can also be reloaded from an internet archive. See the chapter Libs and tools on the source tree.

IDE

This directory contains, in sub dirs, project files for an IDE (Integrated Development Environment). See especially the chapter Directories for the IDE beside src.

build

A directory where build outputs are written. This directory should be cleaned if a 'build all' is to be done. It can also refer via a symbolic link to a temporary location, for example on a RAM disk.

2. How to separate components

2.1. comparison with Java package path

From the view of maven or gradle with their Java orientation, the topic of components is clarified by Java itself:

Java has a strong package structure, valid worldwide. Every source file is well sorted into this package structure. No conflicts exist, and all Java developers respect this approach.

What does this approach look like?

The package path usually starts, accepted by everyone, with the internet address of the responsible company or with a commonly accepted name.

So, the Java core sources have the package paths java/lang/…, java/util/…. Special Java sources start for example with sun/… or com/sun if they were created in the past by the Sun company. Others start with oracle or, more consistently, with com/oracle because they are written by a company which can be found on the internet as oracle.com. Sources in the package path org/w3c are from www.w3c.org, etc. My own sources start with org/vishia/.

So, in a Java source tree, all components can be mixed without conflicts. Of course the further entries in the package path are well sorted, in the responsibility of the company which determines the start of the path. The company should handle this consistently, and confusion with other companies is excluded. There is no company which starts its own sources for example with com/oracle or (I hope so) with org/vishia.

2.2. What is a component

This is sometimes not well defined.

Software, or other technical systems, consist of modules. A module is a separately described and testable unit which can be used in different ways. A source file, or some associated source files with possible dependencies on one another, form a module.

A common understanding is that a component is a larger unit than a module.

A component is either an assembly of modules that forms an independent larger unit, or, specifically in software, it is an assembly of modules which are delivered together. All modules in a component have the same version and they are tuned to each other.

This is a sensible definition. Following it, a component has one repository (for example in git), with its versions.

Of course sub-components can be defined. Each (sub-)component has its repository, and a component consisting of sub-components has a repository with child dependencies.

From the view of Java: a jar file is a delivery form of a component, possibly consisting of sub-components. You should find all the sources of one jar file in one repository. A jar file and its sources have a defined version.

From the view of C/++ development: some C or C++ sources which are committed together should be seen as a component. This has a version and a responsible maintainer, and maybe the delivery form of a library with header files, or even of a source pool.

For other languages, or also for hardware description files, it applies analogously.

2.3. separated git repository for each component

That is an interesting question.

As also shown in the chapter Sources of different components in the working tree, the .git is related to components. The components should be distinguished at the second level, in the src/componentX tree.

src/AComponent
     +-.git
     +-java
     |  +-companies/package/path
     +-resources
src/BComponent
     +-.git
     +-cpp
     |  +-internal/src/tree
     |  +-include ... not recommended
     +-asciidoc/...
     +-some_more/...
     +-lib

For the include directory see also the remarks in the chapter Separate source and include directories or arrange header beside sources.

The sources of the components are independent of the maven file tree idea and they are also independent of any names (which may not have been reconciled).

This is the best way of separation. This may also be valid for Java, though Java sources are already well separated by the package tree.

For example, in my Java sources I usually have the structure:

src
 +--java_vishiaBase/
 |   +-.git
 |   +-java
 |      +-org/vishia/...
 +--java_vishiaGui/
 |   +-.git
 |   +-java
 |      +-org/vishia/...

Both components have their own directory (the component name) between src/ and the java/ source tree. That is not provided for in maven. Maven suggests src/main/java/org/… without a component sub directory. Why?

Maven has another concept for components. Usually it uses complete jar files, which carry the component structure inside the jar file, and these jar files are not part of the sources in your own source tree. Often they are stored in a temporary folder (C:\users\myName\p2\pool\plugins\org.apache…jar) and updated automatically from the internet on demand. This is the maven approach: get everything over the internet from the world. But this approach is not proper for everyone. In particular, the questions "which version is used", "are all impacts considered" and some more are not necessarily sufficiently clarified with such an approach. The second drawback is an overly opaque dependency on the internet. But exactly the approach "everything can be found in the world" is the core approach of maven. Maven is oriented to large software packages - no limitations, hard disks have enough space. This is not recommended by me! And it is also often not proper for embedded control. But the maven file tree is a proper idea.

2.4. Git repository for the whole src tree

The .git in the component's directory is for the component's versions, whereas a .git in the whole src or working tree is for the application or test environment.

MyWorkingTreeForWhatEver
 +-.git
 +-src
 |  +-cmpnX
 |  |  +-.git
 |  +-cmpnY
 |  |  +-.git
 +-IDE/...
 +-tools
 +-build
 +-someScriptfiles.sh.bat

or equivalently

MyWorkingTreeForWhatEver
 +-src
 |  +-.git
 |  +-cmpnX
 |  |  +-.git
 |  +-cmpnY
 |  |  +-.git
 +-IDE/...
 +-tools
 +-build
 +-someScriptfiles.sh.bat

The second form emphasizes the idea that only sources should be versioned. The content of build is temporary anyway and should never be versioned. But where do the tools, the content of IDE and some scripts come from? The answer is the following:

  • The tools and also the libs should be loaded from a trusted source on the internet. Doing so is a matter of versioning and also saves space in all application sources. The scripts to load the tools and libs are part of the component's sources, see Sources of different components in the working tree; there a src/cmpnX/script_UsedCmpn/load_UsedLibs.sh script is mentioned. Also in the chapter Other parts than sources in the working tree a src/load_tools/* directory with a bill-of-material list and some scripts is mentioned.

The repository (or other version tool) beside the sources should contain all the material which is necessary for the application or test environment in addition to the component's sources,

and

should refer to the load files of the components, at least the primary ones. Because the components contain src/cmpnX/script_UsedCmpn/*, further dependencies are clarified with them.

Note that you can have a .git directory inside these sources:

src/component/.git/TheRepository_itself

or you can have a .git file which refers to the repository at another location:

src/component/.git
.git contains:
gitdir: path/to/repository/.git

The second one is preferred by me, for two reasons:

  • On manual archiving (creating a zip) you do not have all the content of the repositories in the zip.

  • You have a mirror location which makes it easier to compare, change and experiment.
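Git supports this layout directly with the --separate-git-dir option. A minimal sketch (the URL and the paths are examples only):

REM clone the working files to src\cmpnX, but store the repository at a separate location;
REM src\cmpnX\.git then only refers to the separate repository location (gitdir: ...)
git clone --separate-git-dir=D:\repos\cmpnX.git https://example.org/cmpnX.git src\cmpnX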

2.5. Using the same component sources in different working trees

Sometimes one working tree is for one application, another one is for elaborate tests, and a third working tree or sandbox is for another application.

Then it should sometimes be ensured that all applications use exactly the same sources (testing with the same sources is obviously required; another application should behave consistently with the first one).

There are three ways to achieve this:

  • a) All components are placed in only one working tree. This is possible; the files should not be confusing because of the proper inner file tree. But the disadvantage: if you want to clarify which sources are used, or deliver the sources as a zip file, it is too much. All applications are in one. The overview of the used files may also get lost.

  • b) Each component (or specific designated components) has its own working tree. That is also proper if the maintainers of the working trees are different persons, on different computers. Then the content should be kept synchronized; it should be the same.

    • For that, either all working trees use the same version from a central repository (in the network),

    • or the working trees are synchronized by comparison of the file content.

  • … A diff viewer for files in a directory tree usually works fast. The precondition is: the directories should be accessible either in the same network, or, also possible, an exchanged data storage is used ("please give me your files on a stick, I will compare them").

    This option can also be used temporarily, or to experiment with different versions.

  • c) Each component (or specific designated components) has its own working tree, but the directories of the used sub-components are symbolic links (known from UNIX, but also usable in Windows with mklink /J linkname path/to/src).

    Then, similar to a), changing the source in one working tree (for one application) offers the changed sources immediately to the other application. Often this effect is desired. One application is tested and changed; then, without additional effort, the other application or test application is tested with these same changes.

    Approach c) helps to sort the files (better than a)), and gives the opportunity to switch some components to the b) approach and back to c) just by changing the link, instead of copying or locally checking out another version, whatever is necessary.

    This means it is often the best approach.

Example:

D:/software/workingTreeA/src/compn_A/cpp/filex.cpp
                               +
                               +-----------+
                                           +
N:/networkdrive/software/workingTreeB/src/compn_A/cpp/filex.cpp

compn_A should be either a copy which is compared, or a linked directory. A change in D:/software/workingTreeA/ can cause an immediate change in the other tree N:/networkdrive/software/workingTreeB without effort. For example another person can immediately test, or you can start a remote test. Note: for a network drive on Windows you should use mklink /D; a simple junction does not work. But a junction inside the same hard disk is properly visible from another PC in the network.

  • mklink /J name targetdirpath creates a so-called junction, which works only for a targetdirpath on the same hard disk, but it is seen properly in the network.

  • mklink /D name targetdirpath creates a symbolic link, which can also point to any other drive, such as a network drive. This is the equivalent of a symbolic link in UNIX. This command needs administrator rights to execute.

  • mklink /H name targetfilepath creates a file link, a so-called 'hard link' similar to UNIX, whereby the targetfilepath needs to be on the same drive.

Testing approach using git test abilities:

Of course, such scenarios can also be handled with the abilities of git itself. But then all changes must first be committed, untested, just for the test; that is not the best approach in all cases where minor changes should be checked for compatibility.

The core of this question is outside of the source tree organization question, but some details are related.

2.6. How to get interfaces to a binary given library

There are two systems, the system of C and C++, and the system of all other languages.

2.6.1. Common solution: interface as part of the binary

Let's present this with Java. The class file (the compilation result) contains the signatures of public data and all operations of the corresponding class. If a user file is compiled and uses a precompiled class in any library, there is an import statement such as

import package.path.to.UsedClass;

The compiler sees this statement, has a search path to all libraries, finds the appropriate precompiled class and knows how fields and operations are to be used.

If the class is part of the sources to compile, the compiler knows all sources to compile as a whole (unlike in C/++ where only one compilation unit is handled in one step). This means the compiler (javac) can check whether this class is part of the source tree. This is also supported by the fact that Java has a strong rule: a public class must be stored in a .java file with exactly this class name, and all files are arranged according to the package path. This enables the javac compiler to immediately check whether the class is present as a source file. If it is found, the first pass of the compilation is done for this used class, while the compilation of the using class is continued. It is a really powerful and simple organization based on the file and package path rules. Only if the appropriate source file is not found is it searched for in all given libraries.
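As an illustration (all paths and the jar name are only examples), a javac call which resolves used classes first from the source path, compiling them on demand, and otherwise from a library on the class path:

REM compile one class; imported classes are searched in -sourcepath first, then in the jar on -cp
javac -d build\classes -sourcepath src\cmpnX\java -cp libs\usedLib.jar src\cmpnX\java\org\example\UsingClass.java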

More than that: you can have a used class in one version while compiling, but provide the same class in another version at runtime. Both are admissible if the other version of the class is compatible. This is checked before execution, when a class is loaded and compiled to machine code. The linkage pass is done in Java before the program parts are executed (the so-called JIT, just-in-time compiler).

The interface concept in Java increases this flexibility even more. At compile time only the interface needs to be present as a file; the implementation (class file) can vary.

2.6.2. C / C++

The system in C/++ is simpler. The header file, which must be present as source during compilation, describes the interface to the implementation. Here the word 'interface' is meant in a general sense, independent of the interface concept of a language. A library must be translated with the same header file, or at least with a header file with the appropriate content, and (attention!) with the same compilation settings (compiler flags, pragmas). In the C language nothing is checked at link time (for example different packing strategies of unaligned data elements, or the types of arguments of functions). The result may be subtly wrong, non-obvious behavior, or, better for the user, an outright crash. But this is not good for the developer or the software maintainer.

C++ checks more, which sometimes results in obscure (for newcomers) linkage errors. But that is better for the whole software.

That's why (a little conclusion) I use Java instead of C/++ for non-embedded applications.

2.6.3. Separate source and include directories or arrange header beside sources

The separation into source and include sub directories is very familiar. But the sources and their header files belong together. Do not separate what is related! The idea of separating header files into an include directory comes primarily from UNIX, where the sources are not present but delivered as a library (lib), and the include describes the interface, given by the sources, to the libraries. Secondly, the include and the lib are used in different phases, the first for compilation, the second for linking. Whether the include really matches the lib is only ensured by a common version, or perhaps by chance and inertia, nothing else. From that point of view it may be better to arrange the include beside the lib. But in the old times the whole UNIX system with its C compiler was delivered from one source in a proper way. Also, the open source idea was invented only in the 1980s with the "GNU's Not UNIX" slogan.

For C/++ development for embedded applications the libraries have less importance. Usually the whole software is compiled as a whole, also to allow variation of some implementation conditions (optimization rules, mapping to RAM or ROM, maybe some conditional compilation etc.). My point is: you should not separate header and implementation files. Store them side by side in the same source directory. Even if a component is delivered as a library (to save compile time), the sources are a point of interest and should also be delivered with that precompiled library, beside the headers.

3. Libs and tools on the source tree

3.1. Depending parts loaded from internet

A common approach, also used by systems such as gradle and maven, is: some necessary libraries and tool files are loaded to a central position on the PC. This is the home directory of the user ('C:\Users\name…' in Windows). There they are accessible. For example with Eclipse usage you find:

C:\users\myName\p2\pool\plugins\....

One of the ideas is that the libs and tool files have a unique versioned name, the same as worldwide on the internet in the corresponding (maven…) repositories. The local existence on the PC is only a mirror for fast and internet-free access.

There is a discussion of principle: should a file with a versioned name be used, or should the file have its simple name and be taken from a version archive?

  • a) The first case, a versioned name, makes it possible to have several versions in parallel at a central place. This is the maven approach. An application should use the file with the versioned name. For the maven build system this is settled.

  • But if another, non-maven application should also use this file, it has to know the versioned name.

  • b) The second case, the simple name as a member of a version archive, requires that the de facto versioned file is stored in a local relation to the using application. It can be seen as a disadvantage that the same file in different versions must be stored in different folders, and maybe sometimes the same version is also stored more than once in different folders, which needs space on the hard disk.

  • But the advantage for the localized usage is: it accesses the file with the simple, well-known name.

Maven/Gradle decided for a). But this also has some more disadvantages:

  • An application is not self-contained within a working space; it also needs some parts in other directories. This is not obvious. For example, if an application is started under another user name, it does not find the libs.

  • If you deliver a version outside of internet usage as a so-called "copy deployment", a simple bundle of files for example as a zip, it is not complete.

  • You often have no overview of which libs and tools are used, which are present, whether they are current or still used ….

  • The amount of libraries and tool files seems to be nearly unlimited. Too much. Loaded libs and tools from older versions remain on the hard disk.

Of course the pure maven approach includes the possibility of cleaning everything and reloading only the newly necessary files from the internet. It is an internet-driven approach.

Approach b) means that an application is self-contained in one directory tree, with all necessities.

  • The files can be loaded once from the internet; then they can be found locally. Of course cleaning and reloading is possible, but often not necessary.

  • Disadvantage: if you have several applications which use exactly the same version, these files exist more than once on the hard disk.

  • But for related tools you can also have a symbolically linked directory (in Windows often a so-called junction, mklink /J NAME to/dir, is possible).

  • It may be seen as an advantage that an update of the libs and tools with a new version is effective immediately without any effort. The application uses the same file with the same name, in the newer version. If a new version is tested and it is compatible, this is not a disadvantage; rather, it matches the DevOps approach (Development and Operations).

  • If it seems that the newer version may be faulty, or only a comparison with the older version is to be done, it is only necessary to store the other version temporarily, maybe with a temporarily changed symbolic link only for the test. It is very simple to do this change.

To support a self-contained directory tree, approach b) is better and is recommended here.

3.2. libs and tools content, loaded only once, associated with the working tree

Hence in my work on a working file tree I prefer the directories libs and also tools beside src and all others.

  • libs should contain loaded libraries; for the Java approach these are especially .jar files. For C/++ usage these can also be pre-compiled libraries.

  • tools also often contains .jar files, but not for use in compilation. These are small tools for working.

    This tools directory does not contain the elaborate files, for example of IDEs (Integrated Development Environments), with sizes of gigabytes. It should only contain small tools of a few MByte and maybe only some shell scripts or batch files.

To avoid using too much space, there is the possibility of using links; see also the chapter Save the project files in src/main/IDE where links are also used and explained.

The tool files can usually be found with proper versions in internet archives. The sources should also be found beside these archives on the internet, in the best case using a reproducible build approach, see vishia.org/Java/html/source+build/reproducibleJar.html.

So when delivering a source tree in this form, for example as a zip archive, these files need not be part of it. They can be loaded from the internet, only once, on creation or unpacking of this source file tree. After that they are stably present, associated with the application, without conflicts with other applications and independent of the internet.

To load these archives from the internet, a small tools/minisys_vishia.jar is used, contained in the git archive as the only common part. It contains the necessary GetWebfile class.

Wget, known as a Linux command, is unfortunately not available in a standard MinGW installation, nor is it a standard on every Linux system anyway. Hence this function is provided with the minisys_vishia.jar for all systems where Java runs. But minisys_vishia.jar does more.

java -cp tools/vishiaMinisys.jar ...
  org.vishia.minisys.GetWebfile ...
  @tools/bomVishiaJava.txt tools/

(… stands for line continuation).

3.3. Using a bom (bill of material) for which tool files in which version

The bomVishiaJava.txt contains the re-check of the vishiaMinisys.jar, and the check and download of vishiaBase.jar and vishiaGui.jar. The bom contains MD5 checksums. With it the already existing vishiaMinisys.jar is checked to see whether its checksum is okay. If it is not, a warning is output. The other files are loaded and checked (whether the download is correct). If they already exist (on a repeated call), the MD5 checksum is built and compared. The MD5 checksum is noted in this bom file. Hence it is not possible (with the safety of MD5) to tamper with the files, whether on the server, in the download process or on the own PC.

The next important point is: it is documented which files are used and from where. Other systems load downloaded material into a home directory (C:\Users... on Windows), where it is not simply obvious what is used and from where. And the third important point is: the sources of these jar files are stored beside the jar files on the server. The jar files can be built reproducibly (see https://www.vishia.org/Java/html5/source+build/reproducibleJar.html).

  • The tools/vishiaBase.jar is an executable Java archive (class files) of about 1.2 MByte, which contains especially the JZtxtcmd script interpreter. That is used to generate the test scripts and for reflection generation (further usage of the sources). It is a necessary component. This file is downloaded from a given URL on the internet. If necessary you can find the sources of this jar file beside the jar file in the same remote directory. With the sources you can step-debug the tools, for example using the Eclipse IDE https://www.eclipse.org.

  • The tools/vishiaGui.jar as a Java archive contains the ability to execute the SimSelect GUI which is used in src/test/ZmakeGcc/All_Test/test_Selection.jzT.cmd to build and execute specific test cases. It also contains some other classes, for example for the 'inspector' or the 'file commander'.

4. build beside src

In the maven or gradle approach, beside src there should be a build directory:

 +-src
 |  +-main
 |  +-test .....
 +-build

The build directory is the destination folder for all build results, also for the end-user executable. The executable (the last result of the build) can be copied from there to a delivery directory.

The content of build should always be seen as temporary. The build process can be repeated at any time and should be repeatable in a 'clean all & rebuild' approach.

It may be recommended to use a RAM disk for this build directory, organized via a symbolic link, with the advantage that writing and the build process run faster and the hard disk is not burdened with too much temporary stuff.

If you want to get a zip archive of the sources, you should not include build; zip only the src directory.

5. Directories for the IDE beside src

This idea is not part of the Maven or Gradle approach. Maven or Gradle can also be seen as a build system outside of an IDE.

IDE = Integrated Development Environment such as Eclipse, Visual Studio or a specific IDE for embedded software.

Firstly, the IDE project files can be seen as part of the sources. Then they should be stored below src/main/IDE. But that has a disadvantage:

Often the temporary directories are created beside the project files of the IDE. If they are inside src\main and you make a fast zip backup of the sources, the difference is: you may have only kBytes or a few MBytes for the sources alone, but 100 MByte or more with the sources and the temporaries of the IDE. That's bad.

Separation of the IDE solves this problem.

 +-src
 |  +-main
 |  +-test .....
 +-build
 +-IDE
    +-Platform_A
    +-Platform_B
    +-Test

As you see in the example tree, you can have more than one set of IDE files, for different platforms, for tests, and maybe also for different applications.

The IDE project files should always refer to the source files starting with a relative back path (../../src/….), but that is not a problem.

5.1. Save the project files in src/main/IDE

As described in the chapter above, the IDE files should not be part of the src tree; they should be separated into IDE beside src.

But: some of the IDE files should be stored in the software version, without additional effort. How can this be done?

There is a solution using hard links. What is a hard link?

A hard link has been known in the UNIX world since ~1970. It means the same file content is available from different directories. In UNIX (or Linux) you can create a hard link to an existing file with the command

ln path/to/existingfile linkedfile

Then the same file content is available both as path/to/existingfile and as linkedfile, seen from the current directory.

Such hard-linked files are a little harder to understand for normal users. Hence this feature was not available in the first Windows versions and not in DOS. But currently, Windows supports hard links:

mklink /H linkedfile path/to/existingfile

It is the same as in UNIX/Linux but of course with the opposite order of arguments :-(

Look at the following example: for Code Composer Studio (Eclipse based) the project files are stored in src/main/IDE/MSP430/TimeIntr. The files there are the versioned ones.

src/main/IDE/MSP430/TimeIntr
                     +- +createWorkPrj.bat
                     +- HlinkFiles
                         +- +clean.bat
                         +- +clean_mklinkDebug.bat
                         +- .ccsproject
                         +- .cproject
                         +- .project
                         +- lnk_msp430fr4133.cmd
                         +- targetConfigs\

The script +createWorkPrj.bat copies these files as hard links:

echo off
set NAMEWS=IDE\MSP430\TimerIntr
cd %~d0%~p0\..\..\..\..\..
echo cleans and creates a directory %NAMEWS% beside the src tree
echo as Workspace for the Project.
echo the Workspace can always removed again, contains only temp files.
echo All real sources are linked to the src tree beside.
if not exist src\main\%NAMEWS%\+createWorkPrj.bat (
  echo ERROR not found: src\main\%NAMEWS%\+createWorkPrj.bat: faulty path
  cd
  if not "%1" == "NOPAUSE" pause
  exit /B
)
if exist %NAMEWS% (
  echo WARNING exists: %CD%\%NAMEWS%
  echo will be deleted, press abort ctrl-C to exit
  if not "%1" == "NOPAUSE" pause
  rmdir /S/Q %NAMEWS%
)
mkdir %NAMEWS%
cd %NAMEWS%
echo creates a so named hard link, the files are the same as in this original directory
mklink /H .cproject ..\..\..\src\main\%NAMEWS%\HlinkFiles\.cproject
mklink /H .ccsproject ..\..\..\src\main\%NAMEWS%\HlinkFiles\.ccsproject
mklink /H .project ..\..\..\src\main\%NAMEWS%\HlinkFiles\.project
mklink /H lnk_msp430fr4133.cmd ..\..\..\src\main\%NAMEWS%\HlinkFiles\lnk_msp430fr4133.cmd
mklink /H +clean_mklinkDebug.bat ..\..\..\src\main\%NAMEWS%\HlinkFiles\+clean_mklinkDebug.bat
mklink /H +clean.bat ..\..\..\src\main\%NAMEWS%\HlinkFiles\+clean.bat
mklink /J targetConfigs ..\..\..\src\main\%NAMEWS%\HlinkFiles\targetConfigs
call +clean_mklinkDebug.bat
dir
if not "%1" == "NOPAUSE" pause

The result is a newly created file tree in IDE/MSP430/TimeIntr beside src, with

IDE/MSP430/TimeIntr
             +- +clean.bat
             +- +clean_mklinkDebug.bat
             +- .ccsproject
             +- .cproject
             +- .project
             +- lnk_msp430fr4133.cmd
             +- targetConfigs\
             +- Debug\
             +- Release\

The Debug and Release directories are created by calling +clean_mklinkDebug.bat, which is another topic (see the next chapter). The other files are hard links to the versioned files in src/main/IDE/MSP430/TimeIntr. If you change the project files, they are automatically changed also in the versioned directory. But all temporary stuff stays locally in IDE/….

There is a pitfall: if an editor reads the given file, but on save removes the file and creates a new one with the same name, the hard link relation is broken. You should know your tools. Unfortunately working with hard links may not be supported by all tools, because it is not so familiar in Windows (while known in UNIX since ~1970). The solution for this problem is: compare the file content of IDE beside src and src/main/IDE/… if you expect changes, and eliminate bad tools.

5.2. Temporary location for build and output directories for IDEs

The content of +clean.bat in the chapter above is:

echo removes %~d0%~p0\Debug etc (*.db, *.sdf, *.user, .vs, x64 etc)

if exist %~d0%~p0\debug rmdir /S/Q %~d0%~p0\Debug
if exist %~d0%~p0\Release rmdir /S/Q %~d0%~p0\Release

It removes the temporary stuff and supports a 'clean' approach.

The +clean_mklinkDebug.bat creates a symbolic link, a 'junction' in Windows, for the Debug and Release temporary directories to a %TMP% location. If you have this %TMP% located on a RAM disk, you save time and wear on your hard disk. For a RAM disk you can use for example http://memory.dataram.com/products-and-services/software/ramdisk. It is recommended if you have enough RAM in your system (>= 8 GByte). Using 1 or 2 GByte as a RAM disk is usually sufficient.

echo off
REM next statement: %~d0 is the drive from the calling path, %~p0 is the path from calling path.
REM it changes to the directory where this file is stored.
cd /D %~d0%~p0
call +clean.bat
set DBG=%TMP%\TUI_EmbMC\TimerIntr
if exist %DBG% rmdir /S/Q %DBG%
mkdir %DBG%
mkdir %DBG%\Debug
mkdir %DBG%\Release
if not exist Debug mklink /J Debug %DBG%\Debug
if not exist Release mklink /J Release %DBG%\Release
echo TestDebug >Debug\TestDebug.txt
echo TestRelease >Release\TestRelease.txt
pause

The same can be done for the build directory at the root of the working dir.
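A sketch of such a script for the root build directory, following the same pattern as above (the %TMP% sub path MyWorkingTree is an assumption, adapt it to your project):

echo off
REM sketch only: link the root build directory to a location below %TMP%, maybe a RAM disk
cd /D %~d0%~p0
if exist build rmdir /S/Q build
set BDIR=%TMP%\MyWorkingTree\build
if exist %BDIR% rmdir /S/Q %BDIR%
mkdir %BDIR%
mklink /J build %BDIR%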

6. The git repository for a whole application, clone the components repository

If you follow the approaches for components in chapter <#CompnGitRepos>, then you have proper versions for all your components. But the versioning of the whole application is missing.

You should not use approach a) from chapter <CompnGitRepos>, with all applications and tests in one working tree. It may be confusing.

If you have a working tree for one component, maybe with tests in the src/test tree, you can have a git repository for the whole tree. But this should not contain all files! It should be a parent git (or other version system) which refers to the versions of the sub repositories.

I do not favor a git submodule tree; it is too git-oriented and inflexible. For example, if you only want to have a copy for now, or sometimes use version systems other than git, or whatever else, it is more flexible to deal with separate repositories. You have the responsibility to adjust everything manually, of course, but this is done anyway when you decide about the versions. Please follow the idea in the sub chapter:

6.1. clone batch file as presence for a non loaded component

You can/may/should have one file for each component which clones the repository from a remote location. For git this is (example for the component src_emC):

src/main/cpp
          +- +gitclone_src_emC.sh
          +- src_emC

The +gitclone_src_emC.sh contains:

version="2021-08-31"
dstdir="src_emC"
echo this shell script gets the $dstdir core sources of emC
echo if not exists $dstdir: clone https://github.com/JzHartmut/src_emC.git
cd `dirname $0`  ##script directory as current
if ! test -d $dstdir; then
  ##echo for the present clone the src_emC with tag "$version" as 'detached head':
  git clone https://github.com/JzHartmut/src_emC.git -b $version $dstdir
  ##git clone https://github.com/JzHartmut/src_emC.git srcvishia_emC
  cd $dstdir
  pwd
  echo touch all files with the timestamp in emC.filelist:
  #this file is part of test_emC, hence the .../tools exists:
  java -cp ../../../../tools/vishiaBase.jar org.vishia.util.FileList T -l:emC.filelist -d:.
else
  echo $dstdir already exists, not cloned
fi

You can use the clone with the given version. If you want to have the clone of the latest master commit, for current development work, you can uncomment and comment the appropriate lines.

You can choose a proper directory name, appropriate to the component name in your working tree, independent of the name of the remote git repository. But sometimes the name of the remote git repository is also proper as the component's name.

The clone writes the working files into the given dstdir. Hence this script should be arranged beside the component's directory. This is proper because, instead of the component, you then have the +gitclone….sh files in a cleaned src tree. This may be the delivery form of an application without all components.

Now you can either immediately execute the +gitclone….sh file, which writes both the repository and the files into the component's directory. But this is against the recommendation in the chapter separated git repository for each component. It is possible, of course.

The other variant is also simple: clone first, then move the .git repository to its specific other location, and write a .git file instead. Then check out the files at the repository location to get a mirror.
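These steps can be sketched as follows (the repository location D:\repos is only an example; a re-run of git init with --separate-git-dir moves the existing .git directory and leaves the .git reference file behind):

REM clone into the component directory, then relocate the repository
git clone https://github.com/JzHartmut/src_emC.git src_emC
cd src_emC
git init --separate-git-dir=D:\repos\src_emC.git
REM the checkout of a mirror at the repository location is a further manual step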

The org.vishia.util.FileList restores the timestamp of all files, which is not done by a git checkout command respectively by the git clone. See also restoreTimestampsInSrc.html