Jobs/Production Orders

Your Guide to V5 Integration!

The integration of Jobs/Production Orders between V5 Traceability and a customer’s ERP system allows work orders for batch and product formulas to be scheduled within V5 Traceability. These details then allow the jobs to be processed by V5 Terminal.


1. Control Center Layout

In terms of relating job details to what we see in Control Center, we will primarily be using the V5 API to import and export the details shown in the upper ‘Job’ panel, with the information in the lower ‘Job Line’ panel being populated automatically based on formula/recipe setup.

 

Using the V5 Gateway, we can easily populate the data here via the ‘Job’ endpoint. However, a more advanced approach makes use of the ‘PreBatch’ endpoint. We will look at using both of these below.

2. API Links

To find out more about the definitions for these database classes, please see the following links:

Job

PreBatch

3. Integration Template

The integration template for Jobs/Production Orders can be downloaded here.

4. Jobs/Production Orders Field Guide

4.1. Primary Keys

Primary Keys are the unique identifier for each table within the V5 API. For Job, these are:

jobNumber – The unique identifying code for the job.

formula.commodity.code – The unique identifying code for the formula to be scheduled.

 

For PreBatch, the primary keys are:

jobLine.job.jobNumber – As above, the unique identifying code for the job.

batchSeq – The production sequence of the batch for that particular job line.
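To make the nesting of these keys concrete, here is roughly how they would sit in a JSON payload (a sketch with sample values only):

Job:

{ "jobNumber": "JOB-110", "formula": { "commodity": { "code": "BBD-001" } } }

PreBatch:

{ "jobLine": { "job": { "jobNumber": "JOB-110" } }, "batchSeq": 1 }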

4.2. Required Fields

Other fields are required by SG to populate a valid Job line in Control Center. For Job, this is:

products – The number of products required (relative to the base UoM of the formula) by the top (job) line of this job.

 

For PreBatch, this is:

products – The number of products the individual batch should produce.

4.3. Preferred Fields

Preferred fields serve to add more information to the job in question and, while not required, are useful when it comes to added functionality within V5 Traceability. For Job, these could include:

batchesRequired – The number of batches required for the job. If this is not present, the system will use the number of products (described above) together with the base size and min/max products to calculate the number of batches to produce.

status – The status that the job will be imported with (0 = Pending, 1 = Scheduled, 2 = In Progress, 3 = Complete, 4 = Testing, 5 = On Hold). Jobs are usually imported with status ‘1’, unless they are to be scheduled manually at a later date, in which case they can be imported with status ‘0’.

These fields could also apply to PreBatch once we have traversed to the job object within the API.

4.4. Additional Fields

Additional fields can also be included for the Job endpoint, such as:

productionDate – The date the job is to be produced. This imports as blank if not present, allowing the job to be seen by Terminals at all times.

For PreBatch, we can traverse to this same field under jobLine.job.productionDate. However, we can also use the jobLine.productionDate field to affect the production date of the individual job line rather than the whole job, allowing tighter control over the production process.
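For example, the two datapoints sit at different depths of a PreBatch payload (a sketch with sample values; jobLine.productionDate governs the individual line, while jobLine.job.productionDate governs the whole job):

{
  "jobLine": {
    "productionDate": "2024-05-02",
    "job": { "productionDate": "2024-05-01" }
  }
}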

5. Methodology

Depending on the method used for our integration, we can use the following endpoints to facilitate imports:

  • ‘Job’ or ‘PreBatch’ endpoints for JSON integration.
  • ‘Job’ endpoint for CSV fileshare integration.

Exports can be handled in a variety of ways, all of which we will look at below.

5.1. JSON – Import

Job:

We can make use of the ‘Job’ endpoint to create new jobs for production in V5 Traceability.

‘Job’ import Endpoint/URI –

http://host:port/V5-API/api/integrate/import/job

Endpoint Description

For this sample import using the ‘Job’ endpoint, we can structure a basic import file for a single job as below:

 

This sample JSON file can be downloaded here.

Using the API manual, we can see that we are using several of the datapoints within the ‘Job’ class to structure this file. However, to correctly reference the formula code we want to produce, we first traverse to the ‘Formula’ class and then use the ‘Commodity’ class to reach the ‘code’ datapoint, nesting this in the JSON file appropriately, as we can see above.
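As a minimal sketch of that structure, assuming the payload carries the fields from the section 4 field guide directly (sample values throughout; the date format and exact payload shape should be confirmed against the API manual and the downloadable sample above):

{
  "jobNumber": "JOB-110",
  "formula": {
    "commodity": {
      "code": "BBD-001"
    }
  },
  "products": 100,
  "batchesRequired": 2,
  "status": 1,
  "productionDate": "2024-05-01"
}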

We can see a summary of this dataflow process below:

 

If we run the JSON file above, we will see that our job is now visible in the ‘Production’ tab in Control Center.

 

You will notice that this has automatically created the job lines in the lower panel. The ‘Job’ endpoint will do this automatically based on the formula structure.

 

PreBatch:

We can see above that we can use the ‘Job’ endpoint to enter simple jobs into the production plan in V5 Traceability. However, if we want more precise control over our job setup, we can make use of the ‘PreBatch’ endpoint instead. This allows us to specify details such as individual batch size/product count and production location.

‘PreBatch’ import Endpoint/URI –

http://host:port/V5-API/api/integrate/import/pre_batch

Endpoint Description

For this sample import using the ‘PreBatch’ endpoint, we can structure a basic import file for a single job with multiple batches as below:

 

This sample JSON file can be downloaded here.

Examining this file, we can see that we are creating a new job (Job-111) with two batches of a ‘Basic Bread Dough’ formula: one for 30 lbs, to be made in Manufacturing 1, and one for 50 lbs, to be made in Manufacturing 2. If we look at the API manual for the ‘PreBatch’ class, we can see a similar traversal method at play in constructing this file.
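Put together, such a file might look like the following hedged sketch, assuming one JSON object per batch and the traversal described above (the ‘location’ datapoint shown is an assumption for illustration; confirm field names against the ‘PreBatch’ class definition):

[
  {
    "jobLine": {
      "job": {
        "jobNumber": "Job-111",
        "formula": { "commodity": { "code": "BBD-001" } }
      }
    },
    "batchSeq": 1,
    "products": 30,
    "location": { "code": "MANUFACTURING-1" }
  },
  {
    "jobLine": {
      "job": {
        "jobNumber": "Job-111",
        "formula": { "commodity": { "code": "BBD-001" } }
      }
    },
    "batchSeq": 2,
    "products": 50,
    "location": { "code": "MANUFACTURING-2" }
  }
]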

If we run the above JSON file, we will see this new job created. If we then take a look at its batch view, we will see the differences in batch sizes/production locations, as well as our preset custom batch numbers. Batch sequencing can also be included here if desired.

 

5.2. JSON – Export

In terms of receiving job-related files back from the V5 API, there are a few different options from both the IntegrationExport and ExportTransaction classes, which we will look at here.

 

Job:

Individual Job Export Endpoint/URI –

http://host:port/V5-API/api/integrate/export/job/{jobNumber}

Endpoint Description 

This endpoint will export a JSON file with all related information for the specified job. If we run this request using the job number for the job we just imported, we will get a file that looks something like this:
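As a rough sketch of the shape this return might take, using only the ‘Job’ fields from section 4 (the real export will include additional related datapoints; sample values only, see the downloadable example below):

{
  "jobNumber": "JOB-110",
  "formula": {
    "commodity": { "code": "BBD-001" }
  },
  "products": 100,
  "batchesRequired": 2,
  "status": 1,
  "productionDate": "2024-05-01"
}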

 

An example JSON file of this type can be downloaded here.

 

All Jobs:

All Jobs Export Endpoint/URI

http://host:port/V5-API/api/integrate/export/jobs

Endpoint Description

This endpoint will export a JSON file that includes a list of all active jobs in the system, defined as jobs that have a status other than ‘Complete’ and that have not been deleted from Control Center.

If we run this request, we will receive a JSON file structured much the same as the single-job export above but, as mentioned, including all active jobs.

A sample JSON file of this type can be downloaded here.

 

We can also utilize transactional and log endpoints to retrieve more relevant information about jobs and production orders.

For System Logs, the relevant endpoints are:

Job Logs

Consumed/Produced System Logs

5.3. CSV – Import

Header/column definition filename: “job.csvh”

Completed header files should be placed in: “<installdir>\SG Control Center\gateway\import\column_defs”

Import CSV filename: “job-datetime.csv”

CSVs for import should be placed in: “<installdir>\SG Control Center\gateway\import”

 

Header File:

Header files will generally be compiled by SG Systems prior to CSV integrations taking place, but we can see a basic example of one we can use for jobs, using the previously defined class definitions, below:
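As an illustrative sketch (the real header file will be compiled by SG Systems), a jobs header built from the section 4 class definitions might read, one datapoint per column:

jobNumber,formula.commodity.code,products,batchesRequired,status,productionDate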

 

For further information on how we structure these files, please see the main integration page. This sample header file can be downloaded here.

 

CSV Import File:

We can then use the defined order of datapoints in the header to structure our import file. SG Systems can supply a template file for this, listing the datapoint for each column to make things clearer when populating the file. Note that Control Center ignores this first row, so it can be kept in the file when submitting for import.

An example CSV import file for jobs could look something like this:
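For instance, using the header sketched above, such a file might contain (sample values only; the first, descriptive row is the one Control Center ignores):

jobNumber,formula.commodity.code,products,batchesRequired,status,productionDate
JOB-112,BBD-001,100,2,1,2024-05-01
JOB-113,BBD-002,150,3,0,2024-05-02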

 

This sample import file can be downloaded here.

With the import complete, we can see that these additional jobs have been added to Control Center alongside the job that we created using JSON import above.

  

5.4. CSV – Export

As with JSON exports, we have a couple of different options here. We can start by simply exporting a list of jobs within the system:

Jobs/Schedule:

Header/column definition filename: “job.csvh”

Completed header files should be placed in: “<installdir>\SG Control Center\gateway\export\order”

Export CSV filename: “job-datetime.csv”

CSVs for exports will be generated in: “<installdir>\SG Control Center\gateway\export”

We can use the previously defined class definitions to structure a header file to define what data we will get back.

This export would need to be enabled in the Control Center’s Gateway section. We can choose here to export ‘Schedule’.

 
 

Header File:

In this case let’s just use the same header file that we used above for the import:

 
 

CSV Export File:

We would get a return CSV file that would look something like this:
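Reusing that same header, the return file might resemble the following sketch (sample values only):

JOB-110,BBD-001,100,2,1,2024-05-01
JOB-112,BBD-001,100,2,1,2024-05-01
JOB-113,BBD-002,150,3,0,2024-05-02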

 

An example of this file type can be downloaded here. Note that if a header file is not present in the indicated folder, the exported CSV will feature all the datapoints in the ‘Job’ class.

While this export will only provide a list of jobs, we can use a couple of different exports to retrieve more detailed information relating to consumption and production.

 

Batch Logs:

Export CSV filename: “BatchLog-datetime.csv”

CSVs for exports will be generated in: “<installdir>\SG Control Center\gateway\export”

The simplest way to get job production data back from V5 Traceability in the form of a CSV is to use and process ‘Batch Logs’.

Again, this export needs to be enabled in the Control Center’s Gateway section, this time choosing to export ‘Batch Logs’.

 
 

CSV Export File:

We do not need to build a header file here, as the V5 Gateway will populate a set list of datapoints by default. With this export enabled, the system will export a ‘BatchLog’ file each time a batch within a job is completed. A basic example of this is shown below:
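Purely as a hypothetical illustration (the default column set is fixed by the Gateway and the column names shown here are placeholders, not confirmed defaults; see the downloadable example below), such a file might resemble:

jobNumber,batchNumber,formula.commodity.code,products,completedDate
JOB-110,1,BBD-001,50,2024-05-01 09:30:00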

 

An example of this file type can be downloaded here.

 

SystemLog:

If we want to receive more customized or detailed job production data (‘CONSUMED’/‘PRODUCED’) back from V5 Traceability, though, we can make use of the ‘SystemLog’ endpoint to do this.

Header/column definition filename: “SystemLog.csvh”

Completed header files should be placed in: “<installdir>\SG Control Center\gateway\export\order”

Export CSV filename: “SystemLog-datetime.csv”

CSVs for exports will be generated in: “<installdir>\SG Control Center\gateway\export”

For this type of export using this endpoint, we would structure our header using the ‘SystemLog’ database class definitions.

As with the other examples above, this export would need to be enabled in the Gateway section of Control Center:

 
 

Header File:

From here we can structure our header depending on what transactional data related to jobs we want to receive back to the ERP. For our purposes here, this could look like this:
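As a hypothetical sketch only (these datapoint names are illustrative rather than confirmed ‘SystemLog’ class fields; consult the class definitions linked above when building a real header), a job-focused header might read:

transactionType,job.jobNumber,commodity.code,quantity,transactionDate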

 

This sample header file can be downloaded here.

 

CSV Export File:

If we then run a simple batch, the system log export for this will look something like this:

 

This example export can be downloaded here.
