
Data Flow Connector using High Watermark



iWay Service Manager ships with a unique connector that empowers developers to build flows implementing ETL-based operations. The challenges developers face in building ETL-based integration solutions often come down to performance and memory usage, and the Data Flow connector is designed to help with both. There are times when you may want your data flows to be driven by changes that occur in the database; using the high watermark listener allows you to run data flows automatically as the data changes, as opposed to on a schedule.

This article uses a high watermark listener to select only new records for your data flow.

After creating the project, you will need to create a new channel in that project.

Creating a Channel

Right-click Channels, then select New > Channel.

[Screenshot: New > Channel menu]

Fill in the name of the channel and click Finish.

[Screenshot: New Channel dialog]

Creating a High Watermark Listener

This brings up the Channel Builder.

Click on the listener and, on the right, click Change Type.

 

[Screenshot: Change Type]

Select RDB High Watermark and click Finish.

[Screenshot: RDB High Watermark selected]

Several pieces of information are needed to configure the High Watermark listener.

Keep the defaults, but provide the following:

Expand General:

Query SQL: The query the listener runs to identify which data needs to be picked up for this run. In this example, we are using a PostgreSQL table called etl_data.

select max("UPDATE_TMSTMP") as "UPDATE_TMSTMP" from etl_data WHERE "UPDATE_TMSTMP" > '?' order by "UPDATE_TMSTMP"

HWM field: The name of the field whose value is tested against the HWM (high watermark).

UPDATE_TMSTMP

HWM Persistence Type: Where persistence for this channel will be maintained, e.g., File or RDBMS.

In this case, ‘file’

HWM Persistence Location: The location of the file that will manage persistence.

C:\temp\highw.txt
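With file persistence, this file ends up holding just the last committed HWM value as plain text, e.g. a single line like 2021-01-21 12:00:00 (an assumption borne out by the flow below, which reads this file back as a flat document).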

HWM Default: The default value for the initial run of this application.

2021-01-21 12:00:00
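To make the mechanics concrete: assuming the listener substitutes the persisted HWM value for the ? placeholder, the very first poll (using the HWM Default above) effectively runs something like:

select max("UPDATE_TMSTMP") as "UPDATE_TMSTMP" from etl_data WHERE "UPDATE_TMSTMP" > '2021-01-21 12:00:00' order by "UPDATE_TMSTMP"

If that returns a newer timestamp, the listener triggers the flow and persists the new value as the next HWM.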

[Screenshot: High Watermark listener General settings]

Expand Connection:

Keep the defaults, except for:

Driver: The class name of the JDBC driver for the database you are using, in this case PostgreSQL:

org.postgresql.Driver

URL: The URL used to connect to the database using JDBC.

jdbc:postgresql://guedmnigsw05:5432/omni
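The general shape of a PostgreSQL JDBC URL is jdbc:postgresql://<host>:<port>/<database>; the host above is specific to this example environment, so substitute your own server, port, and database name.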

 

[Screenshot: High Watermark listener Connection settings]

Nothing else needs to be configured for the Listener.

Creating a Flow

Click on Flow, and then click the + icon.

[Screenshot: adding a new flow]

 

Fill in a name for the flow and click Finish.

[Screenshot: New Flow dialog]

Open the newly created flow by clicking on the link. This will bring up the iWay palette.

[Screenshot: iWay palette]

Configuring a File Connector

Drag a File Connector onto the palette, connecting it between Start and End.

 

[Screenshot: File Connector between Start and End]

 

Select Action: The action you want this connector to perform.

In this case, ‘read a file from disk’ 

Expand Source:

File Name: The name and location of the file you want to perform the action on.

In this case, ‘c:/temp/highw.txt’

Format: Format of the file.

In this case, ‘flat’.

[Screenshot: File Connector Source settings]

Expand Document settings: 

Keep defaults, except for Tag.

Tag: The tag used while processing this file. The data in the file will be converted into an XML document during processing. In this example the tag is test, which the xpath() expression below relies on.
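Assuming the HWM file contains a single timestamp, with the tag set to test the flat file is converted into an XML document along the lines of:

<test>2021-01-21 12:00:00</test>

which is exactly what the xpath(/test) expression in the next step extracts.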

Adding Post-Execution variables to the File Connector

From Properties, select Post-Execution:

Click on the + icon

 

[Screenshot: Post-Execution properties]

 

Add the following:

hwm_val: This variable takes the value in the test tag from the XML created from the flat file and puts it in the variable hwm_val. Because this flow is reading the HWM file, this effectively captures the current HWM value in a variable.

xpath(/test)

sqlstat: The fixed beginning of the SQL that will be used every time the flow runs.

select * from etl_data where "UPDATE_TMSTMP" > 

hwm_q: Takes the hwm_val variable and wraps it in single quotes.

_qval(_sreg(hwm_val),single) 

sqlstatement: Concatenates the sqlstat and hwm_q variables together, producing valid SQL.

_concat(_sreg(sqlstat),_sreg(hwm_q))
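Tracing the chain with a sample value makes this concrete. Assuming the HWM file currently contains 2021-01-21 12:00:00, the registers evaluate roughly as follows:

hwm_val      = 2021-01-21 12:00:00
hwm_q        = '2021-01-21 12:00:00'
sqlstatement = select * from etl_data where "UPDATE_TMSTMP" > '2021-01-21 12:00:00'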

 

[Screenshot: Post-Execution variables]

 

Configuring a Data Flow

Drag a Data Flow component onto the palette after the Variables component.

Go to Properties for this component.

Click the + sign next to Configuration.

Under the Source DB tab, next to ‘JNDI/JDBC Configurations:’, click the ‘...’ button and select Add Provider.

[Screenshot: Add Provider]

 

Configure the provider:

Driver Class: Driver class for the RDBMS that is being accessed. 

org.postgresql.Driver

Connection URL: Must be a legitimate URL to reach the RDBMS.

jdbc:postgresql://guedmnigsw05.dev.tibco.com:5432/omni

User: Valid user

Password: Valid password

Click on Finish.

Configure Target DB:

Click on Target DB tab.

Select JNDI/JDBC Configurations.

Select the provider. 

In this case Jdbc.provider.1

Click Finish.

[Screenshot: Target DB configuration]

 

Expand Source.

Use the following syntax to read the sqlstatement variable in as the SQL.

_sreg(sqlstatement) 

Expand Target and add the insert statement that will write to the target table:

 

[Screenshot: Data Flow Target settings]

INSERT INTO etl_data_2 ("ADDRESS_LINE1","ADDRESS_LINE2","ADDRESS_CITY","ADDRESS_STATE","ADDRESS_POSTAL","COUNTRY_CODE","LAST_NAME","FIRST_NAME","DEA","EMAIL","SSN","PHONE_NUMBER","CREDIT_CARD","VIN","IBAN","SWIFT_CODES","EIN_ITIN","UPDATE_TMSTMP") values(?ADDRESS_LINE1,?ADDRESS_LINE2,?ADDRESS_CITY,?ADDRESS_STATE,?ADDRESS_POSTAL,?COUNTRY_CODE,?LAST_NAME,?FIRST_NAME,?DEA,?EMAIL,?SSN,?PHONE_NUMBER,?CREDIT_CARD,?VIN,?IBAN,?SWIFT_CODES,?EIN_ITIN,?UPDATE_TMSTMP)
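Logically, each pass of the Data Flow amounts to the single statement below (a sketch abbreviated to three columns for readability; the component actually feeds the rows returned by the Source select into the parameterized insert above):

INSERT INTO etl_data_2 ("LAST_NAME", "FIRST_NAME", "UPDATE_TMSTMP")
SELECT "LAST_NAME", "FIRST_NAME", "UPDATE_TMSTMP"
FROM etl_data
WHERE "UPDATE_TMSTMP" > '2021-01-21 12:00:00';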

 

[Screenshot: Data Flow component in the flow]

 

Adding a Loop to a Move Flow Control

Drag a Move Flow Control onto the palette after the Data Flow component.

In the properties of the Move Flow Control, select an action of ‘move document’.

[Screenshot: Move Flow Control properties]

For the properties of the connector, configure for OnSuccess.

 

[Screenshot: OnSuccess execution path]

Create a loop back from the move flow control to the data flow. The configuration can be left with default values for the loop execution path.

 

[Screenshot: loop back to the Data Flow]

 

 

[Screenshot: completed loop]

Configure the execution path between the Move and End as below:

[Screenshot: Move-to-End execution path]

Configuring a File Writer

I added a file writer to handle the case where the file reader or the data flow fails, and an End after the file writer.

I also added a Junction to write any error messages to the same file.

Drag a Junction Flow Control onto the palette.

Drag a File Connector onto the palette.

Drag an End Flow Control onto the palette.

Place these as they are in the screenshot below.

[Screenshot: error-handling components placed in the flow]

No configuration is needed for the Junction Flow Control.

Configure the Execution Path between the File reader and the Junction as follows.

[Screenshot: File reader to Junction execution path]

Configure the Execution Path between the Data Flow and the Junction as follows.

[Screenshot: Data Flow to Junction execution path]

No configuration is needed for the execution path between the Junction and the File Connector (writer).

Configuring the File Connector (writer):

The Source tab can be left alone. This will pass what was returned by the data flow, which should be error information about the failure.

The Target tab should be configured as below: 

This will write out any errors to the error.txt file.

[Screenshot: File Connector (writer) Target settings]

Configure the Execution Path between the File Connector (writer) and End.1 as below.

[Screenshot: writer-to-End.1 execution path]

Appendix: DDL to create the two tables

(These will have to be edited for your database/schema, and data will need to be added to etl_data; a sample insert follows the DDL below.)

etl_data:

-- Table: public.etl_data

-- DROP TABLE IF EXISTS public.etl_data;

CREATE TABLE IF NOT EXISTS public.etl_data
(
    "ADDRESS_LINE1" character varying(48) COLLATE pg_catalog."default",
    "ADDRESS_LINE2" character varying(1) COLLATE pg_catalog."default",
    "ADDRESS_CITY" character varying(27) COLLATE pg_catalog."default",
    "ADDRESS_STATE" character varying(2) COLLATE pg_catalog."default",
    "ADDRESS_POSTAL" character varying(12) COLLATE pg_catalog."default",
    "COUNTRY_CODE" character varying(1) COLLATE pg_catalog."default",
    "LAST_NAME" character varying(16) COLLATE pg_catalog."default",
    "FIRST_NAME" character varying(12) COLLATE pg_catalog."default",
    "DEA" character varying(1) COLLATE pg_catalog."default",
    "EMAIL" character varying(31) COLLATE pg_catalog."default",
    "SSN" character varying(13) COLLATE pg_catalog."default",
    "PHONE_NUMBER" character varying(17) COLLATE pg_catalog."default",
    "CREDIT_CARD" character varying(1) COLLATE pg_catalog."default",
    "VIN" character varying(21) COLLATE pg_catalog."default",
    "IBAN" character varying(1) COLLATE pg_catalog."default",
    "SWIFT_CODES" character varying(1) COLLATE pg_catalog."default",
    "EIN_ITIN" character varying(1) COLLATE pg_catalog."default",
    "UPDATE_TMSTMP" timestamp without time zone
)

TABLESPACE pg_default;

ALTER TABLE IF EXISTS public.etl_data
    OWNER to omni;
-- Index: etl_data_2_ix
-- (Note: as the name suggests, this index is intended for etl_data_2;
--  run it after creating that table below.)

-- DROP INDEX IF EXISTS public.etl_data_2_ix;

CREATE UNIQUE INDEX IF NOT EXISTS etl_data_2_ix
    ON public.etl_data_2 USING btree
    ("VIN" COLLATE pg_catalog."default" ASC NULLS LAST, "SSN" COLLATE pg_catalog."default" ASC NULLS LAST)
    TABLESPACE pg_default;
-- Index: etl_dataix

-- DROP INDEX IF EXISTS public.etl_dataix;

CREATE UNIQUE INDEX IF NOT EXISTS etl_dataix
    ON public.etl_data USING btree
    ("VIN" COLLATE pg_catalog."default" ASC NULLS LAST, "SSN" COLLATE pg_catalog."default" ASC NULLS LAST)
    TABLESPACE pg_default;

 

etl_data_2:

-- Table: public.etl_data_2

-- DROP TABLE IF EXISTS public.etl_data_2;

CREATE TABLE IF NOT EXISTS public.etl_data_2
(
    "ADDRESS_LINE1" character varying(48) COLLATE pg_catalog."default",
    "ADDRESS_LINE2" character varying(1) COLLATE pg_catalog."default",
    "ADDRESS_CITY" character varying(27) COLLATE pg_catalog."default",
    "ADDRESS_STATE" character varying(2) COLLATE pg_catalog."default",
    "ADDRESS_POSTAL" character varying(12) COLLATE pg_catalog."default",
    "COUNTRY_CODE" character varying(1) COLLATE pg_catalog."default",
    "LAST_NAME" character varying(16) COLLATE pg_catalog."default",
    "FIRST_NAME" character varying(12) COLLATE pg_catalog."default",
    "DEA" character varying(1) COLLATE pg_catalog."default",
    "EMAIL" character varying(31) COLLATE pg_catalog."default",
    "SSN" character varying(13) COLLATE pg_catalog."default",
    "PHONE_NUMBER" character varying(17) COLLATE pg_catalog."default",
    "CREDIT_CARD" character varying(1) COLLATE pg_catalog."default",
    "VIN" character varying(21) COLLATE pg_catalog."default",
    "IBAN" character varying(1) COLLATE pg_catalog."default",
    "SWIFT_CODES" character varying(1) COLLATE pg_catalog."default",
    "EIN_ITIN" character varying(1) COLLATE pg_catalog."default",
    "UPDATE_TMSTMP" timestamp without time zone
)

TABLESPACE pg_default;

ALTER TABLE IF EXISTS public.etl_data_2
    OWNER to omni;
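
Finally, since the walkthrough needs rows in etl_data to pick up, here is a minimal sample insert (all values are made-up test data; the timestamp is deliberately later than the HWM Default so the first poll finds it):

INSERT INTO public.etl_data
    ("LAST_NAME", "FIRST_NAME", "EMAIL", "UPDATE_TMSTMP")
VALUES
    ('Smith', 'Pat', 'pat.smith@example.com', '2021-01-22 09:30:00');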

