Monday, May 25, 2020

JSON mapping and Transform


Current Issue and requirement


Nowadays most applications use APIs to interact with internal distributed applications or with 3rd party applications. A common need is to map the API results to a different custom JSON format. If you are working in a strongly typed language like C# on .NET, you typically end up doing the following to map between different JSON formats.

1. Create the source C# POCO object.
2. Create the result C# POCO object.
3. Deserialize the JSON to C# objects for both the source and destination classes.
4. Create a mapper class which maps the two C# classes and generates the result object, mapping properties with different names. Here you can use the AutoMapper library.

Use case scenario


This need gets more complex when you are dealing with multiple 3rd party APIs that must map to a single result JSON type. Here are a couple of scenarios:
1. Air ticket booking: invoke multiple airline APIs for a ticket booking request, consolidate the result objects, and show them to users.
2. Quotations or pricing for an item from multiple stores or vendors.
3. Hotel booking, etc.

In the above scenarios, if you build a system that may later need to add a new API vendor, developers have to repeat all the above-mentioned steps, so the implementation is not closed for modification (the Open/Closed principle of the SOLID principles).

Solutions:

For this issue, I found querying JSON with complex JSONPath expressions an interesting way to do the mapping. This is explained in [Newtonsoft Json](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm). You can define a transform JSON document and use it to generate the resulting JSON.
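To illustrate the idea (this is my own minimal sketch, not JUST.net's or Newtonsoft's actual engine), treat the transform document as a template whose "#valueof(...)" string values are lookup paths into the source object. Only plain dotted paths are handled here; real engines support full JSONPath, including the array filters used later in this post:

```javascript
// Minimal sketch of the "transform JSON" idea: the transform document is a
// template; any string value of the form "#valueof($.a.b)" is replaced by the
// value found at that path in the source object.
function valueOf(source, path) {
  // "$.Name" -> ["Name"]; "$.Address.City" -> ["Address", "City"]
  return path.replace(/^\$\.?/, "").split(".").filter(Boolean)
    .reduce((node, key) => (node == null ? undefined : node[key]), source);
}

function transform(source, template) {
  if (typeof template === "string") {
    const m = template.match(/^#valueof\((.+)\)$/);
    return m ? valueOf(source, m[1]) : template;
  }
  if (Array.isArray(template)) return template.map(t => transform(source, t));
  if (template && typeof template === "object") {
    const out = {};
    for (const [k, v] of Object.entries(template)) out[k] = transform(source, v);
    return out;
  }
  return template;
}

const source = { Name: "Atul Patel", Address: { City: "Mckinney", State: "Texas" } };
const template = { Name: "#valueof($.Name)", City: "#valueof($.Address.City)" };
console.log(transform(source, template)); // { Name: 'Atul Patel', City: 'Mckinney' }
```

The point is that the mapping lives in data (the template), not in code, so a new source format only needs a new template.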

This approach has been implemented by WorkMaze/JUST.net:

https://github.com/WorkMaze/JUST.net

It uses a transformation language similar to XSLT. It also includes functions, described in the JUST documentation, to convert values in the desired output JSON.

I have tried one small example myself, which I explain below.
The project provides lots of examples, and almost all cases are covered in its open-source unit tests.

Microsoft also has a JSON transformation library project called JDT, which transforms a JSON file using a JSON transformation file.

https://github.com/microsoft/json-document-transforms

It also has a wiki page here: https://github.com/microsoft/json-document-transforms/wiki
This solution is aimed at transforming configuration files like "appsettings.json" used in ASP.NET Core projects, and its approach is to merge two files rather than just transform one. For example, if you define a source JSON and a transform JSON according to the JDT specification, the result is a merge of the matched properties from the transform plus the unmatched source properties as well. You have to specify a delete explicitly for each unmatched or unneeded property in the transform.
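JDT's default behavior can be pictured as a recursive merge in which transform values win and untouched source properties survive. A sketch of that idea (my own simplification, not the actual JDT implementation):

```javascript
// Sketch of JDT-style default merging: the transform is merged into the
// source, so properties present only in the source survive unless explicitly
// removed. This is a simplification, not the actual JDT library.
function jdtMerge(source, transform) {
  const isPlainObject = v =>
    v !== null && typeof v === "object" && !Array.isArray(v);
  if (isPlainObject(source) && isPlainObject(transform)) {
    const out = { ...source }; // keep unmatched source properties
    for (const [key, value] of Object.entries(transform)) {
      out[key] = jdtMerge(source[key], value); // matched properties recurse
    }
    return out;
  }
  return transform === undefined ? source : transform; // transform value wins
}

const appsettings = { Logging: { LogLevel: { Default: "Information" } }, AllowedHosts: "*" };
const prodTransform = { Logging: { LogLevel: { Default: "Warning" } } };
console.log(jdtMerge(appsettings, prodTransform));
// => { Logging: { LogLevel: { Default: 'Warning' } }, AllowedHosts: '*' }
```

Note how AllowedHosts survives even though the transform never mentions it; that is exactly why JDT requires an explicit delete for unwanted properties.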

One code example

Source JSON:

    {
        "Name": "Atul Patel",
        "Addresses": [
            {
                "Address1": "234 Brewer st",
                "Address2": "",
                "zipcode": "75072",
                "City": "Mckinney",
                "State": "Texas",
                "Country": "USA",
                "AddressType": "PrimaryHomeAddress"
            },
            {
                "Address1": "343 McCullam st",
                "Address2": "",
                "zip": "75845",
                "City": "Dallas",
                "State": "Texas",
                "Country": "USA",
                "AddressType": "OfficeAddress"
            }
        ],
        "DateOfBirth": "07/07/1980"
    }
    

Transform JSON:

    {
        "Name": "#valueof($.Name)",
        "Addresse": {
            "Address1": "#valueof($.Addresses[?(@.AddressType=='PrimaryHomeAddress')].Address1)",
            "Address2": "#valueof($.Addresses[?(@.AddressType=='PrimaryHomeAddress')].Address2)",
            "zip": "#valueof($.Addresses[?(@.AddressType=='PrimaryHomeAddress')].zipcode)",
            "City": "#valueof($.Addresses[?(@.AddressType=='PrimaryHomeAddress')].City)",
            "State": "#valueof($.Addresses[?(@.AddressType=='PrimaryHomeAddress')].State)",
            "Country": "#valueof($.Addresses[?(@.AddressType=='PrimaryHomeAddress')].Country)"
        },
        "BirthDate": "#valueof($.DateOfBirth)"
    }

Result JSON:

    {
        "Name": "Atul Patel",
        "Addresse": {
            "Address1": "234 Brewer st",
            "Address2": "",
            "zip": "75072",
            "City": "Mckinney",
            "State": "Texas",
            "Country": "USA"
        },
        "BirthDate": "07/07/1980"
    }

Conclusion:

Using a library like JUST.net can provide a cleaner approach to mapping different JSON formats by providing a JSON transform file. Any time you have a new source, you don't need to make any code changes; providing a transformation file is enough. This satisfies one of the SOLID principles (the Open/Closed principle).

References:


https://github.com/WorkMaze/JUST.net
https://www.codeproject.com/Articles/1187172/JUST-JSON-Under-Simple-Transformation
https://github.com/microsoft/json-document-transforms
https://github.com/microsoft/json-document-transforms/wiki
https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm

Tuesday, February 11, 2020

Learn SQL from zero to hero



One of my friends wanted to learn SQL Server and queries, so I put together a few online resources for the basics; they may be helpful for someone else too.
If you want to learn queries, then you must practice them on your own. Microsoft's AdventureWorks is a really good sample database, and almost every online tutorial and video uses it as an example.

Here is the URL to download the AdventureWorks sample database; it also explains how to install it, etc.



Database objects like schemas and tables are explained here:


Some business scenarios are explained here with queries:

 
This is one of the great free online tools for practicing queries against a database. It has query samples, you can see the output, and it also has some assessments:






Enjoy SQL queries.

If you have any questions, please ask in the comments; I will try my best to reply.

Friday, March 8, 2019

Import - export data using Azure blob storage and bcp utility


This is an example of exporting a data file from a local SQL Server using the bcp utility, uploading it to Azure blob storage, and importing it into an Azure SQL server.
We will do it in the following steps:
  1. Export data from local SQL server 
  2. Create Azure blob storage
  3. Create an Azure blob storage container
  4. Upload exported files to blob storage
  5. Import data from uploaded azure blob storage data file

1. Export data from the local SQL server:

Using the bcp utility we can export a .dat data file and a format file.

bcp "SELECT Name, Color, Price, Size, Quantity, Data, Tags FROM Product" queryout product.dat -S (localdb)\MSSQLLocalDB -d MemEligibiltyTest -T
Here "product.dat" is the output filename.
-S: SQL Server hostname
-d: database name
-T: trusted connection
We need to provide a format for each field; bcp prompts for each field as below.
Enter prefix-length of field Name 2: 8
Enter field terminator none: none
Enter the file storage type of field Color nvarchar: cr
Enter prefix-length of field Color 2: 8
Enter field terminator none: none
Enter the file storage type of field Price money: cr
Enter prefix-length of field Price 1: 8
Enter field terminator none: none
Enter the file storage type of field Size nvarchar: cr
Enter prefix-length of field Size 2: 8
Enter field terminator none: none
Enter the file storage type of field Quantity int-null: cr
Enter prefix-length of field Quantity 1: 8
Enter field terminator none: none
Enter the file storage type of field Data nvarchar: cr
Enter prefix-length of field Data 2: 8
Enter field terminator none: none
Enter the file storage type of field Tags nvarchar: cr
Enter prefix-length of field Tags 2: 8
Enter field terminator none:
Do you want to save this format information in a file? Y/n y
Host filename bcp.fmt: product.fmt
Starting copy...
4 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total: 15 Average : (266.67 rows per sec.)

As I provided a prefix length of 8 for each field, the generated file keeps 8 readable characters per prefix instead of a binary character; you can see the rest of the characters are in binary format.

This creates the following two files: product.dat and product.fmt. The first is the data file and the second is the format file.

2. Create Azure blob storage

We will use PowerShell for the Azure connection.
Azure PowerShell works with PowerShell 5.1 or higher on Windows, or PowerShell 6 on any platform. To check your PowerShell version, run the command:
$PSVersionTable.PSVersion
You can install the Azure PowerShell module using the following command:
Install-Module -Name Az -AllowClobber

Connect to Azure with a browser sign-in token:
Connect-AzAccount
o/p:
Account SubscriptionName TenantId Environment
------- ---------------- -------- -----------
Atulxxxxxxx@xxxxxx.com ABCD - Development dxxxxxx3-xxxx-xxxx-xxxx-xxxxxxxxxxxx AzureCloud
Once you are logged in to Azure, it shows the default subscription. If you want to execute everything in another subscription, you need to select that subscription.
You can get an Azure subscription by subscription id using the following command.
PS C:\Windows\system32> $context = Get-AzSubscription -subscriptionId xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
You can store the output in a variable and use that variable to set the Azure context, selecting the particular Azure subscription.
command:
PS C:\Windows\system32> set-AzContext $context
o/p:
Name Account SubscriptionName Environment TenantId
---- ------- ---------------- ----------- --------
ABCD  (xxxxxxxx-xxxx-xxx... Atul.Patel@xxxx... ABCD AzureCloud xxxxxxxx-xxxx-x...
Define variables for the resource group and the storage account name.
$resourceGroupName="your-resource-group-name"
$blobStorageAccountName="yourblobstorageaccountname"
You can create a new Azure storage account using the following command, storing the account context in a variable.
PS C:\Windows\system32> $storageAccount=New-AzStorageAccount -ResourceGroup $resourceGroupName -Name $blobStorageAccountName -skuname Standard_LRS -Location WestUS2
o/p:
StorageAccountName ResourceGroupName Location SkuName Kind AccessTier CreationTime ProvisioningState
------------------ ----------------- -------- ------- ---- ---------- ------------ -----------
yourblobstorageaccountname your-resource-group-name westus2 StandardLRS Storage 3/5/2019 8:55:10 PM Succeeded
If you already have an Azure storage account, you can get it using the following command.
PS C:\Windows\system32> $storageAccount=Get-AzStorageAccount -ResourceGroup $resourceGroupName -Name $blobStorageAccountName
PS C:\Windows\system32> $storageAccount
o/p:
StorageAccountName ResourceGroupName Location SkuName Kind AccessTier CreationTime ProvisioningState
------------------ ----------------- -------- ------- ---- ---------- ------------ -----------
yourblobstorageaccountname your-resource-group-name westus2 StandardLRS Storage 3/5/2019 8:55:10 PM Succeeded

3. Create an Azure blob storage container

Store the storage account context in a variable:
$ctx=$storageAccount.Context

# Variable for the container name
$containerName="yourcontainername"

Create a new container using the following command:
new-AzStorageContainer -Name $containerName -Context $ctx

Name PublicAccess LastModified
---- ------------ ------------
blogstoragename Off 3/5/2019 8:59:56 PM +00:00

4. Upload exported files to blob storage

You can use the same Azure blob storage container context to upload files.
command:
set-AzStorageblobcontent -File "C:\Users\atul.patel\product.fmt" `
-Container $containerName `
-Blob "product.fmt" `
-Context $ctx

o/p:
Name BlobType Length ContentType LastModified AccessTier SnapshotTime IsDeleted
---- -------- ------ ----------- ------------ ---------- ------------ ---------
product.fmt BlockBlob 755 application/octet-stream 2019-03-05 21:37:54Z Unknown False

Another file.
set-AzStorageblobcontent -File "C:\Users\atul.patel\product.dat" `
-Container $containerName `
-Blob "product.dat" `
-Context $ctx
o/p:
Name BlobType Length ContentType LastModified AccessTier SnapshotTime IsDeleted
---- -------- ------ ----------- ------------ ---------- ------------ ---------
product.dat BlockBlob 456 application/octet-stream 2019-03-05 21:38:08Z Unknown False

List the uploaded files in the Azure blob storage container:
PS C:\Windows\system32> Get-AzStorageBlob -Container $containerName -Context $ctx | select Name
Name
----
product.dat
product.fmt

5. Import data from the uploaded Azure blob storage data file


-- Import data into the SQL Server DB from Azure blob storage

-- Import into DB
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'XXXXXXXXxxxxxxxxxxxxXXX'

--  Create credential with Azure Blob SAS

CREATE DATABASE SCOPED CREDENTIAL youcredentialname
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2018-03-28&ss=bfqt&srt=sco&sp=xxxxxxxxxx&se=2019-03-06T23:30:14Z&st=2019-03-06T15:30:14Z&spr=https&sig=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';
-- NOTE: do not include the leading '?' character in the secret!

-- Create external data source
CREATE EXTERNAL DATA SOURCE blogstoragename
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://yourblobstorageaccountname.blob.core.windows.net/blogstoragename',
CREDENTIAL= youcredentialname --> CREDENTIAL is not required if a blob storage is public!
);

--Create table

CREATE TABLE dbo.Product(
Name nvarchar(50) NOT NULL,
Color nvarchar(15) NULL,
Price money NOT NULL,
Size nvarchar(5) NULL,
Quantity int NULL,
Data nvarchar(4000) NULL,
Tags nvarchar(4000) NULL
)

BULK INSERT Product
FROM 'product.dat'
WITH ( DATA_SOURCE = 'blogstoragename',
FORMATFILE='product.fmt',
FORMATFILE_DATA_SOURCE = 'blogstoragename',
TABLOCK);
--Just select
SELECT Name, Color, Price, Size, Quantity, Data, Tags
FROM OPENROWSET(BULK 'product.dat',
DATA_SOURCE = 'blogstoragename',
FORMATFILE='product.fmt',
FORMATFILE_DATA_SOURCE = 'blogstoragename') as products;



Monday, January 21, 2019

SQL server JSON using practical scenario with .Net Core, C# and Dapper

SQL Server 2016 came with good support for storing JSON. You can find many syntax tutorials online; my point in adding one more is to give an idea of where we should use it and where exactly it fits best.
We may have already tasted NoSQL DBs like MongoDB (and the big list from each cloud vendor).
If you are exposing your data via an API as JSON, then why not store it as JSON?

Let's jump into an example.




In my example, I have created controllers for Customer and CustomerContact. I have a service folder, and each action has its own service class. I have used Dapper, a lightweight ORM, to interact with the database. This architecture is inspired by Jimmy Bogard's slice-based architecture; I have left a URL explaining it in the reference section below.
In each service, the class has its own Query/Command model class and its own handler.


Here are my basic model Customer and CustomerContact.


Customer and Customer Contact.

Note: code url is given below in the reference section.

Here we assume the Customer is basically a company, and each customer may have multiple CustomerContacts. You could have separate tables in the database for Customer and CustomerContacts, with a strong relationship between them via a referential constraint.

For the current example, we keep both Customer and CustomerContacts in the same table as an object graph, by having a CustomerContacts column that stores JSON in the Customer table.
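Conceptually, a row of such a Customer table could look like this (the names and values here are invented for illustration); the CustomerContacts column simply holds the contact array as JSON text:

```json
{
    "Id": 1,
    "Name": "Contoso Ltd",
    "Active": true,
    "CustomerContacts": [
        { "FirstName": "John", "LastName": "Doe", "EmailAddress": "john@contoso.com" },
        { "FirstName": "Jane", "LastName": "Roe", "EmailAddress": "jane@contoso.com" }
    ]
}
```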




Infrastructure background used in the example.

1. Create Customer


Following is my post-action under the controller.


        [HttpPost]
        public async Task<int> Post(Customer customer)
        {
            return  await _createCustomer.Handler(customer);
        }

In the above controller, I have injected ICreateCustomer and assigned it to variable _createCustomer.


CreateCustomer has IDbConnection injected, which is a SqlConnection instance registered in the startup class of the ASP.NET Core app as scoped:
services.AddScoped(typeof(IDbConnection), x => ( new SqlConnection(Configuration.GetConnectionString("SQLJsonDBConnectionString"))));


Command:
We have a Handler method in the CreateCustomer service. The handler handles a command passed as an argument of the method. You can create a separate CreateCustomerCommand, map it to the model attached to the database, and expose to the controller only the properties available in the command.

Dapper:

Dapper is a lightweight ORM which extends the IDbConnection interface. In the above CreateCustomer example, we have used the QuerySingleAsync method, which executes a SQL statement and returns the result as a single value.
You can find more methods here:
Url:

In the above example, we have used a simple insert query with an OUTPUT clause returning the newly created Id.

Serialize an object to JSON:
As we want to store CustomerContacts as JSON, we need to serialize it using JsonConvert.SerializeObject(object).


Postman

Our CreateCustomer command looks like above from postman.

In a SQL server, it looks like this.


2. Get Customer


We have created a customer, now let's do get Customer.

Query Object:
As we try to follow the SOLID principles, we will create another service, GetCustomers, which has its own Query and response model.

    public class CustomerResponseModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public bool Active { get; set; }
        public string CustomerContacts { get; set; }
    }

    public class Query
    {
        public int Id { get; set; }
    }

SQL:
We will have a simple select query as the SQL statement, with Id as a parameter passed from the Query object.

Dapper:
QueryAsync with CustomerResponseModel as the generic type parameter gives a list of CustomerResponseModel.
The selected fields and the response model property names should match.

Response:
To get the response into the CustomerContact model, we can use JsonConvert.DeserializeObject&lt;ResponseModel&gt;().


Postman

The client won't notice how we converted or stored the data in the database; the response looks the same.


3. Create customer with dynamic customer contacts

In the above example, we have seen that we can insert data as JSON when we have fixed CustomerContact object properties. Another advantage of having JSON in the database is that we can add any number of properties, regardless of any fixed schema.
The JSON data is stored schema-less.


CreateCustomerCommand:
CreateCustomerCommand looks just like the Customer object, with only the CustomerContacts property as dynamic.

SQL: SQL is the same as we did for fixed Schema.

Dapper: QuerySingleAsync with the int return type. We only have to serialize the dynamic object to JSON before saving it.


Postman: While posting CreateCustomer, you can add any required new properties regardless of any schema.

Disadvantage: In this approach, the schema is normally controlled by the UI, which is not a good thing. If it is exposed without security, it allows any junk data, so you have to be careful while implementing this approach.
I still recommend that even if you have a requirement for dynamic properties, you also keep some schema at the model level. You still don't have to worry about schema changes in the database for newly added fields.


4. Get customer as dynamic


In a similar way you can get a Customer with dynamic CustomerContacts.

With a similar SQL statement, you just need to deserialize the JSON into a dynamic object using JsonConvert.DeserializeObject&lt;dynamic&gt;().

This is a little safer in case you have a dynamic object with additional properties.


5.  Add Customer contact


We have a Customer, and we saved CustomerContacts as a JSON array in SQL. If we have to add one more element to the same JSON array, we can use the JSON_MODIFY() function.

CreateCustomerContactCommand:
This can be a command class having CustomerId and CustomerContact properties.



SQL:
 var sql = @"UPDATE [dbo].[Customer]
             SET [CustomerContacts] = JSON_MODIFY(CustomerContacts, 'append $', JSON_QUERY(@CustomerContact, '$'))
             WHERE Id = @CustomerId";

The JSON_MODIFY function can append one more element to the existing JSON and update the same row.
JSON_QUERY is useful for inserting a pure JSON object instead of a JSON string with "\" escape sequences injected.
More technical details of these functions are explained here:

https://docs.microsoft.com/en-us/sql/t-sql/functions/json-modify-transact-sql?view=sql-server-2017
https://docs.microsoft.com/en-us/sql/t-sql/functions/json-query-transact-sql?view=sql-server-2017

If the JSON object is inserted without JSON_QUERY, it looks like the highlighted text below.


Postman:



6.  Get customer contacts


If you want to get only the customer contacts, you need to use SQL Server's OPENJSON function.

GetCustomerContactQuery:
The query has all the parameters you want to pass; in our case, CustomerId is the important one.



SQL:


var sql = @"SELECT js.*
                        FROM Customer c
                        CROSS APPLY OPENJSON(c.[CustomerContacts], '$') WITH (
                        Id uniqueidentifier
                        ,Title VARCHAR(10)
                        ,FirstName VARCHAR(50)
                        ,MiddleName VARCHAR(50)
                        ,LastName VARCHAR(50)
                        ,CompanyName VARCHAR(50)
                        ,SalesPerson VARCHAR(100)
                        ,EmailAddress VARCHAR(100)
                        ,Phone VARCHAR(20)
                        ) AS js
                        WHERE c.id = @CustomerId";

You have to CROSS APPLY OPENJSON on the JSON column, listing all the columns with their data types and lengths in the WITH clause. The column names should match the JSON property names, including case.

OPENJSON is explained technically in detail here:




JSON result set looks like above when getting each property individually.



If you don't want to use individual properties, or you have dynamic schema-less properties, then you can use the OPENJSON() function without the WITH clause.

Advantages:

1. Easy storage of an object graph in a single table.
2. Easy to add a schema-less column in SQL Server, getting NoSQL DB features for a smaller portion of an application. No need to switch databases.
3. Easy to get and insert records as JSON, with either a fixed schema model or a dynamic model.

Disadvantages:

1. This feature cannot completely replace a NoSQL database.
2. If dynamic fields have to rely on a schema from the UI, then it is not good for security reasons.
3. It doesn't have good function support for adding a new field across a fixed-schema JSON array.
4. If you have a big JSON array in a row, there is a risk of information loss from an accidental edit by a user or DBA. It's a really good idea to add the temporal table feature to keep the full history of the table in this particular case.

If you have a column with a JSON array and you want to add one field to each object of the array, you have to either do string operations or use the approach I explained in the following StackOverflow answer.

Example Code: 
https://github.com/atulpatel/SQL_Json

Ref:
https://stackoverflow.com/questions/49828897/update-an-existing-json-value-inside-a-json-array-in-sql
https://docs.microsoft.com/en-us/sql/t-sql/functions/json-modify-transact-sql?view=sql-server-2017
https://docs.microsoft.com/en-us/sql/t-sql/functions/json-query-transact-sql?view=sql-server-2017

Dapper:

Sunday, May 20, 2018

Customize UI design for multi tenant asp.net application using Bootstrap

When you are developing a multi-tenant application, all your tenants (customers) want their design or color scheme (theme) to be different, per their brand. First, you have to come up with a unique design or layout for your web app which is neutral and can easily be customized into any different design. Reading this requirement, of course, Bootstrap comes to mind. By using Bootstrap you can easily make your web app design very flexible and responsive. We all know the benefits of using Bootstrap; if you don't, go to getbootstrap.com and get them.

Bootstrap can be the base for a responsive web application. You add custom CSS by overriding existing Bootstrap classes. You can also make your Bootstrap customization common to all tenants by using the .less sources provided by Bootstrap instead of directly using the .css.

I have added my example of designing a multi-tenant web application using Bootstrap.

For our implementation (our example), we used Bootstrap v3.3.7, as we still had a learning curve in understanding the new version of the framework and considered using the new Bootstrap a bit of a risk for the team. You can use a newer version of Bootstrap and apply the same strategy for handling multi-tenant web design customization. We used an ASP.NET Core MVC project in Visual Studio.

The implementation of the custom design is done in the following steps.

1. Install basic Bootstrap with the .less files.

You can download the Bootstrap .less files from the following URL:

https://getbootstrap.com/docs/3.3/getting-started/

Alternatively, you can install it via packages like npm or NuGet.

It is also described there how to install Grunt and run Grunt commands:

https://getbootstrap.com/docs/3.3/getting-started/

2. Define your own customization on top of bootstrap using your own .less files.

As per the ASP.NET Core MVC project folder structure, I have added the Bootstrap .less files under lib/bootstrap.

I have added a folder called "less" under the root folder and added a main.less file. This can be your application's main .less file; you can add a reference to bootstrap.less in it. The bootstrap.less file is located under lib/bootstrap/less. It is the main file of Bootstrap, referencing all the other Bootstrap .less files and the Bootstrap variables. bootstrap.less looks like below.



main.less file has bootstrap.less reference.




Check Bootstrap's variables.less file. Bootstrap defines variables for all colors and sizes, and you can change these variable values directly. By including bootstrap.less in main.less you can use these variables in your custom CSS classes too. You can also override the variable values if you want to keep a separate custom .less file per tenant.

3. Install Grunt and packages for .less.

You can use Gulp or Grunt as a task runner for generating .css files from .less files, and for other tasks like minification and bundling. For our example, we have used the Grunt task runner.

You can find the detailed article about how to use Grunt with asp.net core here and for Gulp here.

You need Node.js installed on your machine; installing Node.js also gives you the Node Package Manager (npm).

You can install Grunt using npm by the following command.

npm install -g grunt-cli

For our project, we need to add an npm configuration file (package.json).




To manually install Grunt in your project, open a command prompt and change to your project directory:

cd c:\{your project directory}

If you have added Node.js to the PATH environment variable, you can run the npm command directly; otherwise you have to give the full path to the npm executable, as below, and run the Grunt install.


C:\{your project directory}> "c:\Program Files\nodejs"\npm install grunt --save-dev

Install packages for .less

C:\{your project directory}> "c:\Program Files\nodejs"\npm install grunt-contrib-less --save-dev

o/p: 
+ grunt-contrib-less@1.4.1
added 62 packages in 8.574s

Using Visual Studio, you can simply update the package.json file and update the packages from Dependencies -> right-click on "npm" -> click on "Restore packages".

"Package.json"

"Restore Package"

It will look like the following after restoring the packages.
Add a Gruntfile.js to the solution. Get more information here: https://gruntjs.com/getting-started

A sample Gruntfile looks like below.

4. Generate different stylesheet files according to custom variables using Grunt.

Using the Grunt task runner you can generate .css files from .less files. We have already installed the required .less package;
now we need to add a Grunt task for .less with different parameter options. You can find the whole list of parameters and a guide to adding a Grunt task for less here: https://www.npmjs.com/package/grunt-contrib-less

We are going to use the "modifyVars" parameter to modify .less variable values. Using this parameter, you can pass a value for any .less variable, like Bootstrap's @brand-primary.
With the "files" parameter you can define the source .less file and the destination .css file (the result file).

In our case, we have our main.less file, which references bootstrap.less along with custom CSS classes. We can define different tasks per environment, and also different tasks per tenant.



In the Gruntfile above, we defined tasks to generate CSS for the development and production environments.
For development you sometimes need to inspect the .css classes, so there is no need for compression; for the production environment you can define compress: true, which minifies the .css.
We defined the value of the 'brand-primary' variable under modifyVars for both environments.

Under the "files" tag, you define the source .less file and the resulting .css file, as in the following example.

files: { 'wwwroot/css/site.debug.css': 'wwwroot/less/main.less'}

The Grunt task generates the site.debug.css file at the defined path using the main.less file.
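Reconstructing from the description above (the original screenshot did not survive), a Gruntfile along these lines would do it; the paths follow the example, while the color values and tenant name are placeholders:

```javascript
// Gruntfile.js sketch -- reconstructed from the description in this post,
// not the exact file from the project. Color values and tenant names are
// placeholders.
module.exports = function (grunt) {
  grunt.initConfig({
    less: {
      development: {
        options: {
          compress: false,                           // keep readable CSS for debugging
          modifyVars: { "brand-primary": "#428bca" }
        },
        files: { "wwwroot/css/site.debug.css": "wwwroot/less/main.less" }
      },
      production: {
        options: {
          compress: true,                            // minify for production
          modifyVars: { "brand-primary": "#428bca" }
        },
        files: { "wwwroot/css/site.min.css": "wwwroot/less/main.less" }
      },
      tenant1: {
        options: {
          compress: true,
          modifyVars: { "brand-primary": "#d9534f" } // tenant-specific brand color
        },
        files: { "wwwroot/css/site_tenant1.css": "wwwroot/less/main.less" }
      }
    }
  });

  grunt.loadNpmTasks("grunt-contrib-less");
  grunt.registerTask("default", ["less:development"]);
};
```

Running `grunt less:tenant1` would then produce the tenant-specific stylesheet from the same main.less source.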

In a similar way you can add another task for a different tenant, passing variable values according to the customer's requirements. You can also generate a separate variables.less file with customer-specific values and generate the .css using different .less files per tenant.
The tenant task is shown below.



5. Add tenant code switch.

You can switch the .css file according to your tenant id, managed in _layout.cshtml.
After login, you can get the tenant name programmatically. You can implement a base controller with one global variable that is available to all views. I will let you decide how you get your tenant name, as I want to focus on switching the stylesheet per tenant.

In the following code sample, I have created one variable called tenantcode which holds the tenant name. This variable can get its value dynamically from the server at login, according to who is logged in. I have used tenantcode to generate the CSS name for that particular tenant.

In ASP.NET Core MVC, you can define environments for development and production, and you can also define custom ones. You can get more info about the environment tag helper from this URL: https://docs.microsoft.com/en-us/aspnet/core/mvc/views/tag-helpers/built-in/environment-tag-helper?view=aspnetcore-2.1

So, as per our logic, it generates a tenant CSS URL like "~/css/site_tenant1.css". It would be just site.debug.css for development.


By changing tenantcode, you can just change the theme of your application.

Tenant1 theme
 Development theme

The code sample is here https://github.com/atulpatel/MvcCoreLessCss.

If you want me to elaborate on anything, let me know in the comments.

Friday, May 11, 2018

How to close your code


We all know the SOLID principles. Every developer knows what they are, because every developer has to go through an interview process to get a job; some can explain them really well in an interview.

Recently, I interviewed with a very reputed financial company. They had 4 rounds of interviews, and almost 3 of them asked questions about the SOLID principles. They all asked very good, interesting questions, and it seemed they were looking for some specific expertise. I have started working there, and their code base is awesome. Yet they are not working anywhere near what they asked in the interview. I also found some controller code files with 2-3k lines of code, which is the presentation layer, even though they have a very nice onion architecture for the project. I hope whoever works there went through that interview process and knows at least the SOLID principles. These principles are not just for performing in an interview; we need to apply them in real code. Maybe they have their own reasons for not following them.

Anyway, here I want to show an example, based on something I saw in their code base, that can very easily be made closed for modification.

Check following code.

       
public void Work(string switchoption)
{
    switch (switchoption)
    {
        case "Fun1":
            // So many lines of code
            break;
        case "Fun2":
            // So many lines of code
            break;
        case "Fun3":
            // So many lines of code
            break;
    }
}



If you come across this kind of code, or end up writing it yourself, here is what can at least be done. The code above is not closed because you don't know what other cases may need to be added in the future. And what if there is a change in the code for one particular case? You have to retest all the cases, because you have changed the function. If your unit tests cover all the switch cases, you are in a slightly better position, but QA still has to check every case manually. If something is missed in testing, you may end up with production bugs. All of this increases the maintenance cost.

I would refactor it in the following stages.

At least extract a private function for the long block of code in each case.



public void Work(string switchoption)
{
    switch (switchoption)
    {
        case "Fun1":
            function1();
            break;
        case "Fun2":
            function2();
            break;
        case "Fun3":
            function3();
            break;
    }
}

private void function3()
{
    Console.WriteLine("Function 3");
}

private void function2()
{
    Console.WriteLine("Function 2");
}

private void function1()
{
    Console.WriteLine("Function 1");
}



Creating private functions at least increases readability. By giving each function a proper name, it describes what that piece of code is doing. But it still has the issue that we may need to add more cases later. This code is still not closed.

Instead of a switch statement, you can use a Dictionary.

    
public void Work2(string switchoption)
{
    Dictionary<string, Action> switchfunction = new Dictionary<string, Action>();
    switchfunction.Add("Fun1", function1);
    switchfunction.Add("Fun2", function2);
    switchfunction.Add("Fun3", function3);

    switchfunction[switchoption].Invoke();
}


You simply create a dictionary keyed by string with Action values and add all the functions to it. The dictionary picks the right case by key name and executes the function by invoking it. You can also build this dictionary outside of the function, or load the key/handler mapping from a data store. If you need to add more cases in the future, you don't have to change this function. This way, the existing Work2 function in the example is closed for modification.
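To take the "build the dictionary outside the function" idea a step further, the case table can be injected through the constructor. A minimal sketch (the Worker class name and the TryGetValue guard are my additions, not from the original code):

```csharp
using System;
using System.Collections.Generic;

public class Worker
{
    // The case table lives outside the method, so new cases can be
    // registered (or loaded from configuration) without touching Work2.
    private readonly Dictionary<string, Action> _handlers;

    public Worker(Dictionary<string, Action> handlers)
    {
        _handlers = handlers;
    }

    public void Work2(string switchoption)
    {
        // TryGetValue avoids a KeyNotFoundException for unknown keys.
        if (_handlers.TryGetValue(switchoption, out var handler))
        {
            handler.Invoke();
        }
        else
        {
            Console.WriteLine($"No handler registered for '{switchoption}'.");
        }
    }
}
```

With this shape, callers decide which handlers exist; Work2 itself never changes when a case is added.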

This implementation solves our problem of closing the Work function for modification, but we still have to add a new function to the same class for every new case. It is also not following the Single Responsibility Principle.

Implement Strategy pattern using Dependency Injection

A basic strategy pattern implementation is explained here: http://www.dofactory.com/net/strategy-design-pattern


The basic idea of the pattern is to define a strategy interface and have concrete classes with different strategy implementations. Based on your need, you create an object of a particular strategy dynamically. Google "strategy design pattern" and you will find plenty of examples.

Here I want to show an example using a dependency injection framework. The dofactory example linked above uses a context class to create objects of the different strategies. We can hand that responsibility to the dependency injection framework.

In the following code, IExampleStrategy is an interface with a functionexample() method. This interface is implemented by three different ExampleStrategy classes, each with its own implementation of functionexample.


  
public interface IExampleStrategy
{
    void functionexample();
}

public class ExampleStrategy1 : IExampleStrategy
{
    public void functionexample()
    {
        Console.WriteLine("ExampleStrategy1.functionexample() executed.");
    }
}

public class ExampleStrategy2 : IExampleStrategy
{
    public void functionexample()
    {
        Console.WriteLine("ExampleStrategy2.functionexample() executed.");
    }
}

public class ExampleStrategy3 : IExampleStrategy
{
    public void functionexample()
    {
        Console.WriteLine("ExampleStrategy3.functionexample() executed.");
    }
}



The following is the context class, where the right strategy is resolved based on a parameter. We inject IUnityContainer into the class, and it resolves the right strategy class by name.
The Work function is the one we are changing to address our main problem of removing switch cases. The first Work overload takes a parameter and decides which strategy to execute: the container returns the right concrete class based on the name passed in. Instantiating the object is the container's responsibility.

The second Work overload does the same thing, but for cases where the developer has to process things in a loop. The container takes care of creating the objects and can also manage their lifetimes, so if the same strategy is requested multiple times in the second overload, the container can reuse a previously created object (depending on the lifetime manager you register).

public class CloseCodeExample2
{
    private readonly IUnityContainer _container;

    public CloseCodeExample2(IUnityContainer container)
    {
        _container = container;
    }

    public void Work(string switchcall)
    {
        var examplefunction = _container.Resolve<IExampleStrategy>(switchcall);

        examplefunction.functionexample();
    }

    public void Work(List<string> switchcalls)
    {
        foreach (var item in switchcalls)
        {
            var examplefunction = _container.Resolve<IExampleStrategy>(item);

            examplefunction.functionexample();
        }
    }
}




Here is our Main function, where we register the interface and its implementations with the Unity container.

class Program
{
    static void Main(string[] args)
    {
        CloseCodeExample codeExample = new CloseCodeExample();

        codeExample.Work("Fun1");

        codeExample.Work2("Fun2");
        codeExample.Work2("Fun3");

        // Unity container registration.
        var container = new UnityContainer();

        container.RegisterType<IExampleStrategy, ExampleStrategy1>("Fun1");
        container.RegisterType<IExampleStrategy, ExampleStrategy2>("Fun2");
        container.RegisterType<IExampleStrategy, ExampleStrategy3>("Fun3");

        CloseCodeExample2 codeExample2 = new CloseCodeExample2(container);

        codeExample2.Work("Fun1");
        codeExample2.Work("Fun2");

        Console.WriteLine("---------------------------------");

        codeExample2.Work(new List<string>() { "Fun2", "Fun3", "Fun1" });

        Console.ReadKey();
    }
}



Let's validate the two SOLID principles we wanted to implement. In the code above, if you want to add a new case, you only need to add another class that implements the same interface and register it with the container. We don't need to change the Work function; it is closed for modification. Also, by creating a separate class for each case, we have implemented the Single Responsibility Principle. Unit tests become separate tests per class, which makes them easy to write.

Note: In the above example, you still have to consider a little defensive code. For example, if no class implementation is registered for a given name in the Unity configuration, Resolve will throw a Unity.Exceptions.ResolutionFailedException. You still have to take care of this kind of thing.
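One way to handle that (a sketch; the WorkSafe method is a hypothetical addition to CloseCodeExample2, using Unity's IsRegistered extension method) is to check the registration before resolving:

```csharp
public void WorkSafe(string switchcall)
{
    // Guard against names that were never registered, instead of
    // letting Resolve throw ResolutionFailedException.
    if (!_container.IsRegistered<IExampleStrategy>(switchcall))
    {
        Console.WriteLine($"No strategy registered for '{switchcall}'.");
        return;
    }

    _container.Resolve<IExampleStrategy>(switchcall).functionexample();
}
```

Alternatively, you can catch the ResolutionFailedException around the Resolve call; checking first keeps the happy path free of exception handling.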

I hope you like it. If you have any questions, add them to the comments.

You can find the code sample for the above on GitHub: https://github.com/atulpatel/AspCoreRnd