Tuesday, September 9, 2014

Agile interface development using ION

I'm attending Inforum next week and will be presenting there on our experience using ION as part of an Agile interface development project.

In my presentation I cover briefly the details of the methodology we developed for this project which allows us to build a basic interface between two systems in about 30 minutes.  In this post I'll outline this methodology and, over the next few posts, I will build out the detail of how this works.

Note that the interface approach I'll describe here focuses on the plumbing between the two systems and intentionally moves business logic for any transformations, control flow or quality of service required outside the interface itself.


There are 5 main components of the interface:

  1. M3 Web Service
  2. ION Endpoint for Web Service
  3. ION Endpoint for file system
  4. ION Document flow to connect Web Service and file system
  5. SQL Server stored procedure to create XML file for ION

Each of these components takes about 5 minutes to set up, and the way they are set up is the same for all web services, which makes for a very quick, repeatable process.

M3 Web Service

An M3 Web Service can be built over M3 APIs, most M3 programs, SQL statements or stored procedures.  We've covered how to build web services over M3 programs and SQL statements before.  As part of this series I'll cover how to build a web service over an API.

Via M3 Web Services and the M3 Web Services Designer (both licensed modules from Infor) we expose the API as a SOAP web service, which can be widely consumed to allow easy integration with M3.  Conveniently, ION can consume these Web Services and thereby expose standard M3 APIs for List, Get, Create, Update and Delete functions.

ION Endpoint for Web Service

This tells ION the URL on which the web service is exposed, and the methods that will be exposed via it.  We also define the authentication with M3 (using an M3 username and password) and the identifier that will be used to track the processing of the ION message.

Through this endpoint ION will be able to call the web service we defined above.

ION Endpoint for File System

For this interface design we're accepting an XML file from an external system (in this case SQL Server) and passing this through ION to M3.  So here we define the source directory where ION will find the XML files.  This source directory can be on a network share, so we also specify the authentication to connect to that share.

Note that instead of using the file connector we could query SQL Server directly via a stored procedure, but we chose the file system approach as it was ultimately simpler and lent itself more readily to code reuse than the direct-to-SQL approach did.

ION Document flow

The document flow connects the endpoints together and defines the XML document that will be passed from the file system to the web service.  The document flow is where we could use the ION mapper to define business logic (like we can in MEC), but with this approach we do not.  ION is simply used to pass the message between systems.  Again this design decision reduced complexity and enabled reuse.

SQL Server to create XML file

Here we use SQL Server to create the XML file that ION requires.  The SQL Server clause FOR XML allows us to take the output from a SQL query and generate an XML document.  We then use bcp to write that document out as a file.
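As a rough sketch, the export can be done in a single bcp invocation — note that the server, database, table, column and share names below are placeholders for illustration, not the names from our project:

```shell
# Illustrative only: FOR XML PATH turns the inner query's rows into an XML
# document, and bcp queryout writes it to the directory the ION file
# connector polls.  -S names the server, -T uses a trusted connection,
# -w writes wide (Unicode) characters.
bcp "SELECT (SELECT OrderNumber, ItemNumber, Quantity FROM InterfaceDB.dbo.InterfaceOrders FOR XML PATH('Order'), ROOT('Orders'), TYPE)" queryout "\\fileserver\ion\inbox\orders.xml" -S SQLSERVER01 -T -w
```

In practice we wrap the query in a stored procedure so the same bcp pattern can be reused for each interface.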



When we put these components together the architecture looks like this:

The two components on the left refer to the solutions we were integrating with M3 via ION, in this case Infor's Advanced Planner and Advanced Scheduler and a manufacturing plant.

In my presentation I go into some detail about the challenges inherent in those integrations and why an Agile approach was required.

The following posts will cover:
  • Building an M3 Web Service over an API, and SQL Server's FOR XML
  • Building the endpoints and document flow in ION
  • Control logic and other considerations

Tuesday, September 24, 2013

M3 13.1 / BE 15 & SmartOffice 10.1.1 UI improvements

I've been using BE 15 for a few weeks now with Smart Office 10.1.1 and I wanted to mention a few of my favourite UI improvements.

There are simple things like the Text Block now defaulting the active control to the Next button.  This is a great time saver when you have T in your panel sequence and you are pressing ENTER to move between the panels.
With BE 14 and below the default control was the text field, so you needed to use the mouse or Tab key to get to the Next button to submit, which interrupted the user's flow.  Now with BE 15 you can just keep pressing ENTER to move between the panels, including Text panels.


The new toolbox screens are great.  Infor has built a new toolbox screen standard (see NCR 5418 on InforXtreme for details) which provides both additional toolbox screens and enhanced capability within the toolbox screens.

There are three major parts to this change:
  1. Views can now be linked to Sorting Orders.  With previous versions of the BE, toolbox screens could use any view with any sorting order.  This would often lead to combinations that made little workflow sense.  You can now specify on a View that it should be restricted to a specified Sorting Order.
  2. Views are extended to 30 columns with up to 250 characters which is close to double the previous limits.  This allows us to make proper use of widescreen monitors.
  3. Columns within the Views can be logically formatted.  We can specify date formatting, numerical formatting, financial formatting (appending CR to negative values) etc.
This is a big change and very beneficial to the screens that support the new toolbox standard.  However, screens that were previously toolbox screens (OIS300, MMS200, PMS100, PMS170, PPS170 etc.) have not been updated to support the new standard.  This is a shame, as the new standard is a significant improvement over what was offered before.  It makes BE 15 a bit of a transitional release, with different approaches to the creation and maintenance of toolbox screens depending on which program you are in.


Perhaps my favourite new function, however, is the addition of related tables and virtual fields.  These provide a framework that allows us to easily build functionality that extends M3.  The related tables can be found in the new program CMS005.
Here I have created a new custom table and linked it to MMS001 based on the primary keys.
With a very simple JScript I am now able to easily add additional fields to MMS001 from the custom table I created in CMS011, and read and write these via the native M3 APIs, e.g.
It's also now possible to show these custom fields in the new toolbox screens.

We were able to do this already with Mashups, JScripts and Web Services, but this new functionality bakes this into the core of M3 and makes building extensions to M3 significantly easier.


Some of the functionality highlights I see as particularly valuable for my customers include:
  • Customer Order workflow improvements to significantly reduce the number of panels required to enter a CO
  • Improvements to the Purchase Order process to ease the analysis of alternate acquisition methods and print a PO without sending it to the supplier
  • Improvements to the Balance ID toolbox to bring some of the DO creation functionality initially introduced in Warehouse Mobility back into the core M3 package
  • Changes to the stocktake functionality to reduce the cost of undertaking cyclic stocktakes by initiating stocktakes on empty locations and during picking
  • Significant improvements for the F&B industry including best before & harvest / kill dates, ageing in hours and minutes, GS1-style extensions to PO processes and addition of allocation restrictions to ensure that a customer is never shipped older product than has already been dispatched to them
  • Lot blending within silos and Lab Inspection changes to allow Lab Inspections at the Balance ID rather than lot level
  • An API for GLS850
  • Improvements to the ability to reconcile the logistics and general ledger systems
  • The ability to mark a user as "Deactivated" within MNS150 without deleting the user.

As time allows I'll cover some of these in additional posts.

Wednesday, September 4, 2013

M3 SDK alternative

There's a great post over on the Smart Office blog about how to build Smart Office applications when you don't have access to the Smart Office SDK.  It would be great to see wider distribution of the Smart Office SDK, as it allows developers to quickly and easily extend the core functionality of M3 using Visual Studio, but this is the next best thing.  All that appears to be missing is the logic for deploying a Smart Office package in LCM, though that appears to be a zip file with a manifest, so it should be possible to reverse-engineer.

Thursday, June 13, 2013

Infor's 10x webcast sessions

Infor have published their 10x sessions on the Internet.  The M3 session is accessible here, and the other sessions are available from this link.  The M3 session provides a high-level overview of the 13.1 / BE v 15 functionality.

Monday, June 10, 2013

M3 BE v15 / 13.1

Infor has recently released M3 v 13.1 / BE v 15.  The release notes are all up on the InforXtreme documentation site.  Over the next few months I'll be working on a BE 14 -> BE 15 upgrade so I've been trawling through the release notes.  There's some interesting stuff there.  Highlights on a first read from a technology perspective are mashup / custom list enhancements and lots more programs supporting customisable Browse panels.

From a functionality perspective the extensions to the advance invoicing functionality look interesting, especially the ability to enforce this and extend it into cashflow planning.  Lab Inspection approval at a Balance ID level is critically needed for the F&B industry so it's good to see it there.  Inbound transportation management is a good addition, filling a hole in the M3 solution.  The API for GLS850 is well overdue, covering an area most sites have had to solve with custom solutions.

There's lots of other interesting changes there and I'll review areas as I get to them in the coming months.

Friday, November 23, 2012

LSO & M3 discussion forums

I was trawling through some older posts on Karin's blog and spotted a reference to a LSO discussion forum in the comments.

The address is http://www.lawsonguru.com/forums/ux/lso/   Thanks Karin for pointing this out :-)

For generic M3 / Movex issues the discussion forum http://erp.ittoolbox.com/groups/technical-functional/intentia-l/ is a great place to ask questions and contribute to the collective community knowledge base.

Friday, November 16, 2012

Lawson Web Services and .Net - a guest post

My friend and colleague Paul Grooby at Resene Paints in Wellington was kind enough to contribute the post below about his trials and tribulations with consuming Lawson Web Services in .Net.



Using Visual Studio .Net and Lawson Web Services.
Paul Grooby – 2012-11-02.

Using Lawson Web Services is a doddle, once you understand how it fits together.  And if you want to generate simpler data entry and display screens (without SQL and wrapping APIs) then this is the ticket.  Once you've created your web services in Lawson Web Services and published these, you can see them in a browser before you start testing them using SOAPUI.


Click on the WSDL file link above – this will load the XML file in the browser that you can then save.


Choose File ->Save from the menu to save the file as a WSDL file on your system.

Now a caveat – while Lawson Web Services generates xsd:datetime definitions for some of the API fields, it doesn't accept xsd:datetime inputs to them – so open the saved WSDL file in a text editor and do a find and replace on any datetime fields.
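If you'd rather do this from the command line, a quick sketch of that find and replace looks like the following – the one-line element here is just an illustrative fragment, not a full WSDL, and the file and element names are placeholders:

```shell
# Sketch of the find and replace: swap every xsd:datetime in the saved WSDL
# for xsd:string.  The file content below is a one-line illustrative fragment.
cat > CRS610MI.wsdl <<'EOF'
<xsd:element name="RequestedDeliveryDate" type="xsd:datetime"/>
EOF

# Replace all occurrences in place.
sed -i 's/xsd:datetime/xsd:string/g' CRS610MI.wsdl

cat CRS610MI.wsdl
```

The same effect can of course be achieved with any text editor's replace-all.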


Replace the datetime fields with a string datatype. Once you've checked the remainder of the WSDL file it's time to do the cool stuff and fire up Visual Studio (we’ll need this WSDL file later).  I'm going to create a blank web application for this demo (it could as easily be a console application or windows application).

I normally add the external WSDL files to the application and place them in their own folder in the project.  Locate the file and add to the project.


From here select the file and in the properties box of the studio project select and copy the file path.

With the file path copied to the clipboard, select the option to add a service reference to the application.

The following dialog box will be displayed:
Paste the file path into the Address bar – don’t click OK just yet – click the Advanced button to check a couple of important options.

 Ensure that these two check boxes are checked and then click the OK button.

Give the webservice a relevant namespace. As a naming convention I tend to leave these the same name as the API / Webservice so that these can be traced back.

Once you click the Ok button a list of service endpoints will be displayed.

Click Ok – Visual Studio will do its thing and generate the necessary references and stubs for the program.

The project will now look like the following 

If your project doesn't show all of the files then click on the button highlighted to see them.

The two files with the extension 'svcinfo' contain details of where the endpoint of the service is located. This is important in that you’d normally develop, test and then deploy in the different environments. If you don’t remember that the endpoints are embedded in these files you could be pointing at the wrong environment.

However, to save a little hassle – and this is why we copied the file locally – we can edit the WSDL file that's contained in the project.

The endpoint is embedded in the WSDL – if you change this to point to say PROD and regenerate the files, the files with the svcinfo extension will be updated – follow this process here:

1: Alter the end point in the WSDL file. Save.

2: Against the service select the option to configure the service reference
This will display the following dialog box:
Make sure the address is pointing to your local file then click Ok.  

From the project menu for the service select the option to update the service reference:

You can check that the service has taken by opening the svcinfo files and checking the end point.

This process also updates the web.config file in this case so we just need to check that this has been updated.

If we see two endpoints with the same contract we’re in trouble – delete the incorrect one, as the process will not work otherwise.

So that’s all there is to adding a reference. If you want to use the interactions you’ll need to write a bit of code. The following section has a simple class that consumes the objects used above.

The following class is used to interact with the webservice (I won’t teach you about programming – suffice to say this works a treat).  It creates a 'Customer' object, calls the various methods to populate the relevant bits of information and retains that information.

using System;
using SampleLWSApplication.CRS610MI;

namespace SampleLWSApplication
{
    public class Customer
    {
        #region PrivateVariables
        private String _company = "";

        // also based on this customer we should be able to also use the GetBasicData
        private String _customername = "";

        private String _customernumber = "";
        private String _customerpassword = "";
        private String _customersordernumber = "";
        private String _division = "";
        private String _facility= "";

        // get the other details such as order number , etc
        private String _ordertype = "";
        private String _password = "";
        private String _payer = "";
        private DateTime _requesteddeliverydate ;
        private String _transactionreason= "";
        private String _username = "";
        private String _warehouse= "";
        private String _emailaddress = "";
        private String _specialnstructions = "";
        #endregion

        public Customer()
        {
            // create the class
        }
        /// <summary>
        /// Get CustomerData is to pull the data through for the customer
        /// </summary>
        /// <param name="CustomerNumber"></param>
        public void GetBasicData(String CustomerNumber)
        {
            // code to do a lookup against the customer and retrieve the basic data from the system
            // always need a client
            CRS610MI.CRS610MIClient crs610MIClient = new CRS610MIClient();
            // create the soap header
            CRS610MI.headerType mwsheader = new CRS610MI.headerType();
            mwsheader.user = _username; // user name for the API from the configuration file
            mwsheader.password = _password; // password for the API from the configuration file
            mwsheader.company = _company;
            mwsheader.division = "xxx";
            // create a new object to hold the details
            CRS610MI.GetBasicDataItem getBasicData = new GetBasicDataItem();
            // set the variables for the process
            getBasicData.Company = 100;
            // set to the passed
            getBasicData.CustomerNumber = CustomerNumber.ToUpper(); // force to upper case

            CRS610MI.GetBasicDataCollection collection = new GetBasicDataCollection();
            collection.maxRecords = 1000;
            collection.GetBasicDataItem = new GetBasicDataItem[1];
            collection.GetBasicDataItem[0] = getBasicData;

            CRS610MI.GetBasicDataRequest crs610MIRequest = new GetBasicDataRequest(mwsheader, collection);

            try
            {
                // pull back the data responses
                CRS610MI.GetBasicDataResponse responseItem = crs610MIClient.GetBasicData(crs610MIRequest);
                // turn this into an array
                CRS610MI.GetBasicDataResponseItem[] ri = responseItem.GetBasicDataResponse1;
                // loop through the array
                for (int i = 0; i < ri.Length; i++)
                {
                    // output the data
                    // set the values
                    _customername = ri[i].CustomerName;
                    _transactionreason = ri[i].FreeField3;
                    _division = ri[i].Division;
                }
            }
            catch (System.ServiceModel.FaultException ex)
            {
                Console.Write(ex.Message);
            }
        }

        /// <summary>
        /// Get CustomerData is to pull the data through for the customer
        /// </summary>
        /// <param name="CustomerNumber"></param>
        public void GetOrderInfo(String CustomerNumber)
        {
            // code to do a lookup against the customer and retrieve the basic data from the system
            // always need a client
            CRS610MI.CRS610MIClient crs610MIClient = new CRS610MIClient();
            // create the soap header
            CRS610MI.headerType mwsheader = new CRS610MI.headerType();
            mwsheader.user = _username; // user name for the API from the configuration file
            mwsheader.password = _password; // password for the API from the configuration file
            mwsheader.company = _company;
            mwsheader.division = "xxx";
            // create a new object to hold the details
            CRS610MI.GetOrderInfoItem getOrderInfo = new GetOrderInfoItem();
            // set the variables for the process
            getOrderInfo.Company = 100;
            // set to the passed
            getOrderInfo.CustomerNumber = CustomerNumber.ToUpper(); // force to upper case

            CRS610MI.GetOrderInfoCollection collection = new GetOrderInfoCollection();
            collection.maxRecords = 1000;
            collection.GetOrderInfoItem = new GetOrderInfoItem[1];
            collection.GetOrderInfoItem[0] = getOrderInfo;

            CRS610MI.GetOrderInfoRequest crs610MIRequest = new GetOrderInfoRequest(mwsheader, collection);

            try
            {
                // pull back the data responses
                CRS610MI.GetOrderInfoResponse responseItem = crs610MIClient.GetOrderInfo(crs610MIRequest);
                // turn this into an array
                CRS610MI.GetOrderInfoResponseItem[] ri = responseItem.GetOrderInfoResponse1;
                // loop through the array
                for (int i = 0; i < ri.Length; i++)
                {
                    // output the data
                    // set the values
                    _facility = ri[i].Facility;
                    _warehouse = ri[i].Warehouse;
                    _payer = ri[i].Payer;
                    _division = ri[i].Division;
                    _ordertype = ri[i].CustomerOrderType;
                }
            }
            catch (System.ServiceModel.FaultException ex)
            {
                Console.Write(ex.Message);
            }
        }

        // Properties here

        #region Properties

        public String SpecialInstructions
        {
            get { return _specialnstructions; }
            set {_specialnstructions = value;}


        }
        public String Company
        {
            get { return _company; }
            set { _company = value; }
        }

        public String EmailAddress
        {
            get { return _emailaddress; }
            set { _emailaddress = value; }
        }
        public String CustomerName
        {
            get { return _customername; }
            set { _customername = value; }
        }

        public String CustomerNumber
        {
            get { return _customernumber; }
            set { _customernumber = value; }
        }

        public String CustomerOrderNumber
        {
            get { return _customersordernumber; }
            set { _customersordernumber = value; }
        }

        public String CustomerPassword
        {
            get { return _customerpassword; }
            set { _customerpassword = value; }
        }

        public String Division
        {
            get { return _division; }
            set { _division = value; }
        }

        public String Facility
        {
            get { return _facility; }
            set { _facility = value; }
        }

        public String OrderType
        {
            get { return _ordertype; }
            set { _ordertype = value; }
        }

        public String Password
        {
            get { return _password; }
            set { _password = value; }
        }

        public String Payer
        {
            get { return _payer; }
            set { _payer = value; }
        }

        public DateTime RequestedDeliveryDate
        {
            get { return _requesteddeliverydate; }
            set { _requesteddeliverydate = value; }
        }

        public String TransactionReason
        {
            get { return _transactionreason; }
            set { _transactionreason = value; }
        }

        public String UserName
        {
            get { return _username; }
            set { _username = value; }
        }

        public String Warehouse
        {
            get { return _warehouse; }
            set { _warehouse = value; }
        }

        #endregion Properties

       
    }
}

A note or two about the code.

The basic structure for each call is the following:
  • Create a client
  • Create a mwsheader object (this is the SOAP Header)
  • Create a base item for the method to pass parameter
  • Create a collection of correct return types
  • Add the base item as array 0 to pass
  • Add a request of the correct type
  • Call the method and return a collection of responseitems of the correct type
  • Check for any errors and if none use the returned collection of types in any manner that you like – for 'Get' methods it's likely that there is only one object returned; for List methods there are likely to be 0 to many.

Importantly – you’ll also find problems with the service when using a List type operation where the amount of data returned by the service exceeds the maximum message size (not sure where this is set, but it seems to be around 64K) – to get around this problem use the following syntax when creating the client.

Note that I've also added complexity by having this endpoint embedded in the web.config file – it's my style of programming.

String endPoint = System.Configuration.ConfigurationManager.AppSettings["INTERNETORDERSENDPOINT"];

WebOrders.InternetOrdersClient orderClient = new InternetOrdersClient(new BasicHttpBinding(BasicHttpSecurityMode.None) { MaxReceivedMessageSize = 2147483647, MaxBufferSize = 2147483647 }, new EndpointAddress(endPoint));



In the above example we've set the maximum message size to a huge number – this gets around the limitation but still may not be enough – in which case you’d probably want to look at the fields being output from the web service and whether you need them all.

Enjoy :-)