Channel: SCN : Blog List - Java Development

JSP(Java Server Pages), Servlets, JSF(Java Server Faces), Web Service(WSDL), MySql, MongoDB (PART 2)


 

This tutorial presents a project created with the technologies mentioned above. It demonstrates the double independence from the database, achieved through a DB layer: independence from the choice between a SQL and a non-SQL database, independence from which SQL database is used, and independence from the ORM (Object-Relational Mapping) framework. The project is also built with current technologies such as JSF and PrimeFaces components.


Build Simple Android App Development Tutorial


 

This tutorial presents how to build a simple Android application that navigates from one layout to another and also opens a web page.

Step by step to configure JAD to decompile source code of class file



In our Java development, if there is no source code attached for a class, like below,

clipboard1.png

we have no way to view its source code.

clipboard2.png

However, you can use an open-source tool, JAD, to decompile the class file so that you can view its source code.


There is also an Eclipse plugin available, which you can download from this link.


1. Download the proper Jad Eclipse plugin according to the version of your Eclipse:

clipboard6.png

2. Download the proper JAD.exe file according to your OS type:

clipboard7.png

clipboard8.png

3. Put the JAD plugin into your Eclipse plugins folder:

clipboard9.png

4. Put the JAD.exe to the bin folder of your JRE installation:

clipboard10.png

5. Restart your Eclipse. Under Windows->Preferences->Java you will find a new option, JadClipse; maintain the path to your JAD.exe in "Path to decompiler":

clipboard11.png

6. Now press F3 on a class whose source code you would like to view, and JAD will decompile it for you:

clipboard12.png



A small tip of Eclipse auto completion setting


Today a colleague told me about a tip regarding the Eclipse auto-completion settings, which I find useful in my daily work.

 

 

There are lots of standard classes, as well as classes and methods I created on my own; however, when I type some characters, no auto-completion dropdown list appears. For example, I have already created a method named "consumeABAPFM", but even after I type "consum", no auto-completion is offered.

clipboard1.png

However, by choosing "Windows->Preferences->Java->Editor->Content Assist", we can achieve what we expect.

 

 

Maintain "Auto activation triggers for Java" with the value ".ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz".

clipboard2.png


After that, whenever we type some characters, the auto-completion dropdown list appears and is refreshed within 200 ms.

clipboard3.png

Is Eclipse SLOWING You Down?


Hi All,

 

Just go to Windows-> Preferences and check 'Show Heap Status'.

 

GarbageCollector.jpg

 

And press the Garbage Collector button. Here are the before and after results.

 

Voila! ! !

 

Thanks Nick.

A Mobile PrimeFaces example based on MyFaces 2.0 running on NW 7.31


Hi everybody,

 

after changing our NetWeaver runtime platform from 7.0 to 7.31, I spent a lot of time trying to migrate some of our Tomcat 7.0-based web applications to SAP NetWeaver 7.31.

 

Most of these applications use popular components and techniques like MyFaces 2.0 (JSF 2.0), Tomahawk, PrimeFaces, Mobile PrimeFaces, Trinidad, CDI, JPA, and so on.

 

Because NW 7.31 still supports only EE 5.0 (not EE 6.0), in most cases the SAP JSF 1.2 implementation was insufficient for our needs.

 

After reading many of the documents on SDN about "heavy resource loading", I found a way to separate the MyFaces (JSF) 2.0 libraries into a library project, so that I could use the Mobile PrimeFaces components in my dynamic web projects, referencing the MyFaces 2.0 implementation via a hard class-loader reference.

 

    <!-- JSF 2.0: MyFaces 2.0.x implementation -->
    <reference reference-type="hard" prepend="true">
        <reference-target provider-name="oevbs.de"
            target-type="library">myfaces~2.0.x</reference-target>
    </reference>

 

The most difficult part was deciding which jars to place in the MyFaces 2.0 library project to avoid "class not found" runtime exceptions, but I found a solution that fulfilled our requirements.

 

Because there is still great interest in using popular Java techniques that exceed the possibilities of the EE 5.0 implementation on NW 7.3, I decided to post my solution on SDN.

 

The example project is a skeleton of an existing project we use in our insurance company ("Elektronische Versicherungsbestätigung" - electronic insurance confirmation). (Sorry, the field names and comments in the example project are in German.)

 

If you like, you'll find all sources of the example project in the Code Exchange Repository: "https://code.sdn.sap.com/svn/MobilePrimeFacesExample/"

 

Important Notice:

  • For entering "Code Exchange", you have to be a registered user!
  • To exclude conflicts with the different licence agreements, I removed the "open source" jars from the project!

 

 

If you just want to see the running web-application:

 

  1. Log in to SCN (http://scn.sap.com/community/code-exchange)
  2. Download the enterprise application archive "eVBMobilExampleEAR.ear" ("https://code.sdn.sap.com/svn/MobilePrimeFacesExample/eVBMobilExampleEAR/eVBMobilExampleEAR.ear")
  3. Download the needed jars (see "https://code.sdn.sap.com/svn/MobilePrimeFacesExample/eVBMobilExample/WebContent/WEB-INF/Needed_Libraries.txt" for the download locations!)
  4. Place the jars into the zipped "eVBMobilExampleEAR.ear" file below the following lib folder: One.jpg Two.jpg
  5. Download the library SDA file "MyFaces20x.sda" (see "https://code.sdn.sap.com/svn/MobilePrimeFacesExample/MyFaces20x/MyFaces20x.sda")
  6. Download the needed jars (see "https://code.sdn.sap.com/svn/MobilePrimeFacesExample/MyFaces20x/Needed_Libraries.txt" for the download locations!)
  7. Place the jars into the zipped "MyFaces20x.sda" file below the following lib folder (note: the names of the jars must exactly match the names listed in provider.xml): Three.jpg
  8. Deploy the standard library file "MyFaces20x.sda" and then the enterprise archive "eVBMobilExampleEAR.ear"

 

Call the web-application using the URL: "http://{Your Hostname:Port}/eVBMobilExample".

 

If everything works fine, you should see the following dialog-pages:DialogStep1.jpgDialogStep2.jpgDialogStep3.jpgDialogStep4.jpg

 

Regards

Steffen

Proxy Class to access SAP Functions: JCo and Enterprise Connector


Hello All,

 

This blog is a compilation of quick steps to develop proxy classes for accessing remote-enabled SAP functions from non-SAP UI technologies.

 

[ Inspired by the forum question : http://scn.sap.com/thread/3404002 ]

 

The steps to be followed are:

 

1. Install NWDS on your local machine.

 

     NWDS is available on SDN. For the complete list of NWDS versions, use the link: https://nwds.sap.com/swdc/downloads/updates/netweaver/nwds/ce/ (it requires an S-User ID; contact any SAP developer for one).

 

2. Then, create a Java Project.

JcoConnector_1.PNG

 

3. Complete the project creation. Right-click on the project and choose New->Other. In the window, select "Enterprise Connector".

JcoConnector_2.PNG

 

4. Click Next. Then fill in the necessary fields.

JcoConnector_3.PNG

5. Provide the SAP System details and SAP login credentials to access the remote-enabled SAP functions.

JcoConnector_4.PNG

 

6. On successful login, search for the function to be used by your application i.e. function for which the proxy classes are required.

 

JcoConnector_5.PNG

 

7. Finally, the proxy classes are generated.

 

However, since the JCo library files/APIs are not yet available in the Java project, compile errors result for all SAP JCo classes in the project.

 

The following steps will assist you in adding the JCo-related jar files to the Java project.

 

1. Go to the build path of the Java project. Choose "Add External JARs" and navigate to the plugins folder of the NWDS.

 

2. Look up the plugins for the following jar files:

 

  • SAPmdi.jar
  • sapjco.jar
  • aii_util_msc.jar
  • aii_proxy_rt.jar

 

JcoConnector_11.PNG

3. Click OK. Now the Java project and the proxy classes are ready to be used in your application.

 

All the Best.

 

Regards,

Sharath

Managing SAP JARs with a repository manager


In software development, a general rule is to not reinvent the wheel. This means: if there is already a library available that does the job, use that library. Without this thinking, even the simplest projects would consume too much time solving already-solved problems. A drawback is that the project will depend on a lot of external libraries, so managing these library dependencies is important. Java projects are no exception: for instance, there are many standard classes available from Apache that everyone uses but that are not part of core Java.

 

In the Java world, two main tools help take care of this: Maven and Ivy. Both allow you to define the dependencies for each project and download them from a central repository. There are publicly accessible Maven repositories available, and while it is easy to find the needed library there, it raises a simple question: is this the right way for your project?

 

Once published there, a library can be downloaded by anyone. Most probably, this is not what you want for your internal project. When you have several developers, does it make sense for each one to access the outside repository and download hundreds of MB when resolving the library dependencies? When a developer creates a new internal JAR, how do you distribute it to the other developers? For code quality analysis, how do you get the jars needed for a binary analysis?


There are internal repository managers available that also act as a proxy for public repositories. The most famous ones are:

  • Nexus
  • Artifactory

 

Installation, configuration and usage of both is simple. If the Java project is already using Maven or Ivy to resolve dependencies, it is only a matter of changing the name of the repository server to the internal one. If a project depends on an external library, the repository manager connects to the public Maven server and downloads it. As all developers use the internal server, caching helps to minimize traffic and reduce the overall time needed to resolve a dependency. Libraries created by your Java projects can be uploaded to the internal repository manager and are automatically accessible to other developers; Maven and Ivy take care of this.
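For Maven, pointing all developers at the internal server is a one-time change in settings.xml. A minimal sketch, where the mirror id and the Nexus URL are placeholders for your own installation:

```xml
<!-- ~/.m2/settings.xml: route all repository traffic through the
     internal repository manager. URL and id are example values only. -->
<settings>
  <mirrors>
    <mirror>
      <id>internal-nexus</id>
      <name>Internal repository manager</name>
      <url>http://nexus.example.corp:8081/repository/maven-public/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```

With `<mirrorOf>*</mirrorOf>`, every repository request goes through the internal manager, which caches downloads for the whole team.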

 

What about the SAP Java jars? How do you get these and publish them to Nexus / Artifactory? SAP does not make them available in a public repository manager, and there is no other way given by SAP to import the jars. What you get are the SCs and DCs (software components and development components). NWDI is the repository manager for SAP Java, but when you use continuous integration, NWDI is not the best option.

 

Using NWDS and NWDI it is possible to sync all DCs to a local computer. This sync process creates a folder structure and copies the associated files, including the jar files. DCs have a public part concept, and each defined public part resolves to a jar file.

 

To get the jars, open the Development Infrastructure perspective in NWDS and select the DC to get the jar from and sync it.

rm1.jpg

Where are the files stored? In the workspace directory for NWDI projects. In my case, I have a workspace named test and only 1 NWDI connection (resolved to 0): <path>\test.jdi\0\DCs\sap.com

 

If you do this for all of SAP's DCs, the directory will look like this:

rm2.jpg

The jar files are stored depending on their DC name.

rm3.jpg

tc/je/usermanagement/api resolves to: <path>\test.jdi\0\DCs\sap.com\tc\je\usermanagement\api\. The jar file is stored in the standard directory, that is: _comp\gen\default\public\api\lib\java

rm4.jpg

This is the jar that needs to be uploaded to your repository manager server. This can be done manually or automated by a script. I developed a small script for that which also replaces the ~ with a . and which I can run from Jenkins.

rm5.jpg
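The renaming logic itself is simple; here is a sketch of the idea in plain Java (the DC name, version and coordinate layout below are made-up examples of one possible convention, not an SAP standard):

```java
// Sketch: derive repository-manager coordinates from an NWDI DC name.
// SAP DC names use '~' as separator (e.g. tc~je~usermanagement~api);
// we replace it with '.' before uploading, as described above.
public class DcNameMapper {

    // "tc~je~usermanagement~api" -> "tc.je.usermanagement.api"
    public static String toArtifactId(String dcName) {
        return dcName.replace('~', '.');
    }

    // Build a Maven-style repository path:
    // groupId/artifactId/version/artifactId-version.jar
    public static String toRepositoryPath(String groupId, String dcName, String version) {
        String artifactId = toArtifactId(dcName);
        return groupId.replace('.', '/') + "/" + artifactId + "/" + version
                + "/" + artifactId + "-" + version + ".jar";
    }

    public static void main(String[] args) {
        System.out.println(toRepositoryPath("com.sap", "tc~je~usermanagement~api", "7.31.0"));
    }
}
```

The actual upload could then be done, for example, with the repository manager's REST API or with `mvn deploy:deploy-file`, triggered from a Jenkins job.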

In Artifactory, the artifact is shown with additional meta data, like age, size and download statistics.

rm6.jpg

Searching for a class is possible as well as browsing the content of the jar to see the individual class files.

rm7.jpg




Java Connection Pool - Design and Sample Implementation


Java Connection Pool - Design and Sample Implementation

                                                                                                    -  Aavishkar Bharara

Introduction

 

In the world of application design, we often encounter the need to fetch, update or delete objects stored in a database. To do this, we require a database connection object. Opening and closing a database connection object can be very costly, as it involves creating the network connection to the database followed by authentication using the username and password. Hence, a database connection is a scarce resource and needs to be utilized efficiently and effectively.

 

 

Connection Pooling

Connection pooling helps to utilize and efficiently re-use database connections in a multithreaded environment. The idea is to share the (limited) database connections among an (in principle unlimited) number of users in the most efficient way.

 

 

Design

Constraints

There are multiple design options for creating an efficient and re-usable connection pool application programming interface.

The design constraints include:-

  • A singleton "Connection Pool" class
  • An option to "request" for the connection from the connection pool.
  • An option to "release" the connection to the connection pool.

Apart from these core design constraints, there are some administration requirements as well, which includes:-

  • Tracking of "who" requested the connection and "when". This helps to track "connection leaks" in the code and take corrective steps accordingly.
  • Tracking of "lost" or "idle" connections which were not returned to the connection pool, and "claiming" them back for the pool.

Design Options

These requirements are very similar to many frameworks already available in the market. Microsoft COM/DCOM provides a common interface called IUnknown to accomplish a similar set of requirements. ...see link for more details

 

Java has similar requirements for the garbage collector, which frees up unused Java objects. ...see link for more details


Implementation

Interface IConnectionPool

An interface IConnectionPool provides the basic methods of the connection pool class.


IConnectionPool.jpg

Interface : IConnectionPoolAdmin

 

This provides the administrative functions for the connection pool class.



Connection Container Class

The implementation starts with creating a container for the "connection object" first.


 

This connection container class stores the attributes of the connection like,

  • Connection is in use or idle
  • Who and when accessed the connection


Connection Pool Class

The connection pool class implements the IConnectionPool interface to provide the basic features of the connection pool. It also implements IConnectionPoolAdmin to provide the administrative functions of the connection pool implementation.

 

        public class ConnectionPool implements IConnectionPool, IConnectionPoolAdmin {

 

This implements the "singleton" pattern in the ConnectionPool via the getInstance() method. The getConnection() method contains the complete logic to get an "idle connection" and set the required attributes on the connection object so that it is marked as currently "in use".
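The attached jar is not reproduced here, but the acquire/release mechanics can be sketched with a generic, thread-safe pool. This is a hypothetical stand-in using only the JDK, not the AviConPool implementation itself:

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch of the pooling idea: a fixed set of scarce
// resources handed out to many callers and returned after use.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(List<T> resources) {
        // Fair queue: callers receive resources in request order.
        this.idle = new ArrayBlockingQueue<>(resources.size(), true, resources);
    }

    // Corresponds to getConnection(): hand out an idle resource,
    // or null if all of them are currently in use.
    public T acquire() {
        return idle.poll();
    }

    // Corresponds to releaseConnection(con): mark the resource idle again.
    public void release(T resource) {
        idle.offer(resource);
    }

    public int idleCount() {
        return idle.size();
    }
}
```

A real connection pool additionally tracks who holds each connection and since when, which is exactly what the IConnectionPoolAdmin interface above is for.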

A Sample Implemented Code

 

Attached is the complete implemented code as the jar file AviConPool.jar, which you can include in your project and start re-using. The following methods need to be called to utilize this jar file.

 

The first call, made only once, should be:

 

      ConnectionPool.getInstance().setup(
                    String DBDriver, String DBURL, String username, String password, int maxConnections);

This needs to be called once from the application's central component.


Subsequently, calls to the connection pool would look something like this:

 

public void functionABC()
{
    Connection con = ConnectionPool.getInstance().getConnection();

    try {
        // <<< Write Your Code Here >>>

        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("select ......");

        if (rs.next()) {
            // Read Data
        }

        // Help GC
        rs.close();
        stmt.close();
        rs = null;
        stmt = null;
    } catch (Exception ex) {
        // Log Errors
    } finally {
        ConnectionPool.getInstance().releaseConnection(con);
    }
}

 



Also attached is the jar file [ConPool.txt, which can be renamed to ConPool.jar] which you can use in building your applications.

Upload large files to NetWeaver Java


The reason behind this blog can be found here: [1].

 

This blog is not about writing a WDJ application for uploading a file, as there is more than enough information available on SCN explaining how to do this. It is about how to configure ICM to allow large file uploads (>100 MB) and what this implies. The configuration is valid for NetWeaver Java >= 7.10, as these releases use ICM.

 

The maximum file size accepted by ICM is limited by the standard configuration. By default, ICM only accepts files no larger than 100 MB. As always, the documentation does not explain why this is; it is just given by almighty SAP [2]:

“The default setting for the size of HTTP requests is a maximum of 100 megabytes”

 

The parameter: icm/<PROT>/max_request_size_KB


To see the current configuration of this parameter, you can use SAP MMC (you secured port 50013 to ensure not everyone can see everything about your portal, including the logs, right?).

 

largefiles1.jpg

largefiles2.jpg

This parameter needs to be changed in the ICM configuration file located on the server. As it is a default parameter, you'll have to create the parameter first to overwrite the default value. After a restart of ICM, the change is effective and you can upload files that are GBs in size. Note that this parameter configures ICM and is therefore generally valid. If your portal is accessible to external users, you should use a reverse proxy to prevent people from outside crashing your server by automatically sending very large files.
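As a rough illustration, the entry in the instance profile could look like this (the value is an example only; icm/HTTP/max_request_size_KB is given in kilobytes, and the exact profile file path depends on your installation):

```text
# Instance profile: raise the HTTP request size limit from the
# default 102400 KB (100 MB) to roughly 2 GB. Example value only.
icm/HTTP/max_request_size_KB = 2097152
```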

 

It would be nice not to have to look up the possible parameters in SAP Help while editing a configuration file; that is just counterproductive. Why not put all the parameters into the file, including a description? That's how most configuration files (for example, in open source software) are done: the file serves as its own documentation.

 

Example application to try it out

 

The limitation is valid for every HTTP request sent to AS Java, so you can try it out with a portal application, a servlet, a KM file upload or a WDJ application. The example here is a WDJ application. You can write your own application using the examples provided by SAP here on SCN. In case you do not want to search, here is a short description of how to code the application.

 

1. Create a view named FileUpload

largefiles3.jpg

largefiles4.jpg

2. Node element uploadFile

largefiles5.jpg

 

3. Uploading the file is triggered by the action UploadFile

 

public void onActionUploadFile(com.sap.tc.webdynpro.progmodel.api.IWDCustomEvent wdEvent)
{
    IPrivateFileUploadView.IContextElement element = wdContext.currentContextElement();
    IWDResource resource = element.getUploadFile();
    if (element.getUploadFile() != null) {
        // do something
    } else {
    }
}

 

 

This is all it takes to have a WDJ application that allows uploading a file. Note that the node element handling the file is of type Resource. During the upload, WDJ creates a temporary file on the server that is kept there until the resource is no longer used.

 

 

Appendix

 

[1] http://scn.sap.com/thread/3278398

[2] SAP Help

 

Some information from SAP Help and SCN about WDJ and file upload / download

 

Title | Link | Release

Uploading and Downloading Files in Web Dynpro | http://help.sap.com/saphelp_nw04/helpdata/en/43/85b27dc9af2679e10000000a1553f7/content.htm | 7.0

Web Dynpro Java Demo Toolkit | http://wiki.sdn.sap.com/wiki/display/WDJava/Web+Dynpro+for+Java+Demo+Kit | >= 7.0

How to Upload and Download Files in a Web Dynpro for Java Application | http://scn.sap.com/docs/DOC-2601 | >= 7.11

SAP Portal KM: Create a resource | http://help.sap.com/saphelp_nw2004s/helpdata/en/43/68a36ba327619ae10000000a1553f6/frameset.htm | 7.0

Loading the InputStream at FileDownload on Demand | http://help.sap.com/saphelp_nw73/helpdata/de/42/f6ec5a09321bc7e10000000a11466f/frameset.htm | 7.3

Uploading and Downloading Files in Web Dynpro Java | http://scn.sap.com/docs/DOC-2561 | 7.0

Implementation of Logging and Tracing

The purpose of this document is to provide best practices in order to create useful Logging and Tracing messages.

1.    Definitions

1.1  Logs

Logs target administrators who encounter a problem. Therefore logs should describe the problem in an easily understandable form; avoid using internal terms and abbreviations in log messages.
Example:
The network connection has been broken
Use com.sap.tc.logging.Category to log information, warnings or errors for the administrator of a customer system.

1.2  Trace Messages

Traces are meant for developers or support engineers trying to identify a problem, and should be as detailed as possible. There is no performance impact from additional trace statements, since traces can be switched on or off based on severity. The goal: provide nearly the same amount of information as we could collect with debugging.
Example:
New change request cannot be created when there is one with status CR_CREATED, CR_INPROCESS, CR_ACCEPTED for the same sales order id and item id
Use com.sap.tc.logging.Location to emit trace messages.

2.    Implementation

2.1  Logging

In order to perform logging, the following steps should be followed:
1: Retrieve the Category object for logging:
private static final Category category = Category.getCategory(Category.APPLICATIONS, "<APPLICATION NAME>");
For convenience and to avoid unnecessary object creation, the following static variables are defined:
  • com.sap.tc.logging.Category.SYS_DATABASE
  • com.sap.tc.logging.Category.SYS_NETWORK
  • com.sap.tc.logging.Category.SYS_SERVER
  • com.sap.tc.logging.Category.SYS_SECURITY
  • com.sap.tc.logging.Category.APPLICATIONS
  • com.sap.tc.logging.Category.SYS_USER_INTERFACE
Security related logging should be performed using Category.APPS_COMMON_SECURITY
2: Log the message using one of the available methods, according to the severity (e.g. infoT, debugT, warningT, errorT, fatalT):
category.infoT(location, method, "User {0} logged in", new Object[] { user });

2.2  Tracing

Proper tracing should be implemented as follows:
1. Create a reference to a Location object for tracing, like this:
private static final Location location = Location.getLocation(<class name>.class);

String method = "<method name>";
Create a String variable which contains the method name, in order to enable filtering for the logs of a given method in the log viewer. This must be the first statement in every method, and the second in constructors. Do not set the method name multiple times within a given method. If there are multiple methods with the same name, distinguish them using the parameter list, like this:
String method = "<method name> (String)";

String method = "<method name> (String, String)";
2. Output the trace message using one of the available methods, according to the severity (e.g. infoT, debugT, pathT, warningT, errorT):
location.infoT(method, "User {0} started a new transaction", new Object[] { user });
For performance reasons, an object array should be used for printing parameters. Elements of the array can be inserted into the text in the following way:
"This is an example text. Parameter 1 is {0}, the next parameter is {1}, the last parameter is {2}", new Object[] { variable1, variable2, variable3 }
3. Trace the entry and the exit of the method in nontrivial cases. For methods throwing exceptions, place the exit call in the finally {...} clause.
location.entering(method);
try {
    …
} finally {
    …
    location.exiting(method);
}
The entering/exiting pair should be used for all methods which have a main business role, and for important utility methods as well. Methods with limited functionality, or methods with only one or two log entries, can be skipped.
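The same entering/exiting discipline exists in the JDK's own java.util.logging, shown here as a runnable stand-in for the SAP Location API (the class and method names are made-up examples):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Stand-in for location.entering/exiting using java.util.logging.
public class EnterExitDemo {
    static final Logger LOG = Logger.getLogger(EnterExitDemo.class.getName());
    static final List<String> MESSAGES = new ArrayList<>();

    public static int compute(int x) {
        LOG.entering("EnterExitDemo", "compute");
        try {
            return x * 2;
        } finally {
            // The exit is traced even if an exception escapes the try block.
            LOG.exiting("EnterExitDemo", "compute");
        }
    }

    public static void main(String[] args) {
        // Capture log records in memory so the entry/exit trace is visible.
        Handler handler = new Handler() {
            @Override public void publish(LogRecord record) { MESSAGES.add(record.getMessage()); }
            @Override public void flush() {}
            @Override public void close() {}
        };
        handler.setLevel(Level.ALL);
        LOG.addHandler(handler);
        LOG.setLevel(Level.ALL);
        LOG.setUseParentHandlers(false);
        compute(21);
        System.out.println(MESSAGES);
    }
}
```

Because entering/exiting log at FINER severity, they cost nothing in a productive system where only ERROR is enabled, which mirrors the point above about trace statements having no performance impact when switched off.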


4. Exceptions should be traced using the traceThrowableT method of the Location class.
try {
    …
    <BUSINESS OPERATIONS>
    …
} catch (Exception e) {
    …
    location.traceThrowableT(Severity.ERROR, method, "Error occurred while calling business operation with parameter {0}", new Object[] { variable}, e);
    …
}

Usually in productive systems only the Error level is enabled. That's why the log message should contain the most important values that were being processed when the error occurred.
If the current class cannot handle the exception and throws it onward, the exception should be logged at every level. If we filter for a given class (which is part of a call chain) in the Log Viewer application, we won't be able to see the exception if we don't log it at every level.
If the given exception is related to the configuration or availability of a system component or of another system in the landscape, it should be reported to a category instead of a location, because this kind of problem can be solved by a system administrator.


5. Input Parameter Checks

If the given method has an input parameter check, we can report the result in a warning message:
if (inputParameter_OrderId > 5) {
    location.warningT(method, "Wrong input parameter Order Id: {0}", new Object[] { inputParameter_OrderId });

    return;

}

6. UI Navigation

If you navigate between pages in the Web Dynpro UI layer, you can log it with Path severity:
location.pathT(method, "Navigate to Clientselection component");
wdThis.wdFireEventNavigationEvent(EventSources.ES_CLIENTSELECTION, eventId);
   
If another layer/application/system is called from the code, it should be logged at Path level (web service calls, JCo calls, etc.). This way the connections between layers and components can be investigated. If the called component is in the same software component, Info level should be used.
7. Business Code
The developers should decide which steps in the code are important, mainly for developers and support engineers. In some cases the messages could be important for administrators as well.
location.infoT(method, "Get identTypeDropDownKey from context");

 

ISimpleTypeModifiable myType = wdThis.wdGetAPI().getContext().getModifiableTypeOf("identTypeDropDownKey");

 

IModifiableSimpleValueSet values = myType.getSVServices().getModifiableSimpleValueSet();
 
Use infoT for the important values/process steps for developers.
location.debugT(method, "Put value {0} into the list with key {1}", new Object[] {e.getAttributeValue("value"), e.getAttributeValue("key")});

values.put((String)e.getAttributeValue("key"), (String)e.getAttributeValue("value"));
Use debugT for the detailed information that can help find the exact cause of a problem. If you would like to print the exact value of a variable, please use location.debugT. There is one exception: calling other layers. Please follow the security restrictions as well: don't insert highly confidential information, such as passwords or PIN codes, into log and trace messages. In ambiguous cases please contact your local security expert!

3.    Severity settings

Default severity settings should be:
  • INFO for categories (INFO and higher severity messages will be logged)
  • ERROR for locations (trace everything with severity ERROR or above)
For performance reasons, the ERROR level should be configured for categories as well in productive systems.

4.    Performance Considerations

If you have parameters, never build Strings yourself; always use the {n}-replacement:
location.infoT(method, "User {0} started transaction", new Object[] { user });
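The {n} placeholders follow the java.text.MessageFormat pattern style (an assumption here; the SAP logging API may format slightly differently). The point of passing arguments separately is that substitution only happens when the message is actually emitted:

```java
import java.text.MessageFormat;

// Stand-in illustration of {n}-replacement: the pattern is only
// rendered on demand, unlike eager string concatenation.
public class PlaceholderDemo {
    public static String render(String pattern, Object... args) {
        return MessageFormat.format(pattern, args);
    }

    public static void main(String[] args) {
        // Same shape as location.infoT(method, "User {0} started transaction", ...)
        System.out.println(render("User {0} started transaction {1}", "alice", "T-42"));
    }
}
```

If the severity is filtered out, the logging framework can skip formatting entirely, which is exactly why building the string yourself with + defeats the purpose.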

If you have to make calculations for traces, check first whether tracing is active:

if (location.beLogged(<severity> /* e.g. Severity.WARNING */)) {
    // perform calculations and do logging/tracing
}

In this case this check should be used (of course, errorT should be used instead of traceThrowableT):
locationCheckIsNeeded.png

5.    Common mistakes regarding Logging and Tracing

Using Error level instead of Info:
throwable.jpg
Printing the content of a given variable is not an error message; a Debug level trace should be used.
Using Debug level instead of Error:
debugLevel.jpg
Exceptions must be logged with the ERROR log level. If an exception is part of normal business behavior (for example, it indicates that a function call has no result), it shouldn't be logged as an exception.
Incorrect log messages:
Biplab.jpg
How does this log message help to understand what happened in the code? A good log message should be understandable for everybody, even without the source code.
The log message should contain as much information about the workings of the program as possible. In this case the error code and error description could be useful. The timestamp is part of the log message by default.
traceThrowable.png
Unhandled exceptions:
unhandledException.jpg
Unhandled exceptions will be written to the System.err stream and spam the log file. Always use the traceThrowableT method with Error severity.
Using the wrong method to print exception details in the log:
printStackTrace.png
Using printStackTrace is forbidden, because it prints the stack trace into the log in a barely readable format.
catching.jpg
The method "catching" reports the exception at Warning level; furthermore, entering your own error message is not possible.
In both cases use the location.traceThrowableT method instead.
Using the traceThrowableT method instead of errorT:
traceThrowable.png
If there is no exception, the error should be reported with the errorT method.
Calling the exiting method in the middle of a method:
exiting.png
The entering/exiting pair should be called only once, at the very beginning and end of the method. Otherwise, during issue investigation the investigator will think that the method has already ended.
Logging closely related data in several log entries:
multiple log entries.jpg
If there are multiple related variables to log, they should be logged in one log entry. The related data is then visible in one place, and the logging will be faster because of the reduced number of I/O operations.
location.infoT(method, "System Number Reset \n Sales Order ID: {0}\n EOSE Plan: {1}\n Quote: {2}\n Quote Line Item: {3}\n Sales Order Item: {4}\n SEN: {5}\n SSI: {6}", new Object[] {so.getID(), so.getPlan(), quote.getName(), quote.getLineItem(), soItem.getId(), so.getSEN(), so.getSSI()});
Please note this is just example code. Every value will be written on a new line, so it remains readable.
Method name abuse
methodNameAbuse.jpg
The method name should contain ONLY the method name, not the class name. In the Log Viewer application the location field will contain both the class name and the method name; if the method name contains the class name as well, this field won't be readable anymore, because it will be too long and contain the same data twice.
If there are multiple methods with the same name, the parameter list can be added in order to distinguish between them.
Sensitive information in the log:
Printing sensitive personal data, such as account balances or passwords, is forbidden.
Null pointer check is missing:
If a printed value is provided by a method of a class, a null pointer check for that class is mandatory.
Swallowed exception
try {
    …
} catch (Exception e) {
    // TODO write exception handling
}
In this case the exception won’t be visible in the log.
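At minimum, the caught exception should be handed to the logger. The SAP Location API is not available here, but an equivalent pattern with plain java.util.logging would look roughly like this (class and method names are invented for illustration):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class ExceptionLogging {
    private static final Logger LOG = Logger.getLogger(ExceptionLogging.class.getName());

    // Parses the input and logs the failure instead of swallowing it.
    public static int parseOrDefault(String value, int fallback) {
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            // Passing the exception object means its stack trace ends up in the log.
            LOG.log(Level.WARNING, "Could not parse value: " + value, e);
            return fallback;
        }
    }
}
```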

7.    Appendix

Official guide for Logging and Tracing

Use SourceMonitor to monitor your java code complexity


Today I found a useful free tool called "SourceMonitor" which can help to calculate and monitor the code complexity of Java (and other programming languages like C++, C#, etc.).

 

clipboard18.png

 

For the definition of cyclomatic complexity and how to calculate it, please refer to the details on Wikipedia.
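As a rough rule of thumb, cyclomatic complexity is the number of decision points in a method plus one. A small illustrative sketch (my own example, not taken from SourceMonitor's documentation):

```java
public class ComplexityExample {
    // One loop plus one branch: two decision points, so cyclomatic complexity 3.
    public static int countPositives(int[] values) {
        int count = 0;                 // straight-line entry path: +1
        for (int v : values) {         // loop condition: +1
            if (v > 0) {               // branch: +1
                count++;
            }
        }
        return count;
    }
}
```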

 

In order to demonstrate the usage of this software, I use the very simple Java class below as an example:

 

package test;

import java.util.ArrayList;

public class monthTool {
    static ArrayList<String> monthCollection = new ArrayList<String>();

    public static void main(String[] args) {
        monthTool tool = new monthTool();
        tool.printV1(1);
        tool.printV2(2);
        tool.printV1(0);
        tool.printV2(-1);
        tool.printV3(3);
        tool.printV3(13);
    }

    public monthTool() {
        monthCollection.add("Invalid");
        monthCollection.add("January");
        monthCollection.add("February");
        monthCollection.add("March");
        monthCollection.add("April");
        monthCollection.add("May");
        monthCollection.add("June");
        monthCollection.add("July");
        monthCollection.add("August");
        monthCollection.add("September");
        monthCollection.add("October");
        monthCollection.add("November");
        monthCollection.add("December");
    }

    public void printV1(int month) {
        System.out.println("Month is: " + getMonthNameV1(month));
    }

    public void printV2(int month) {
        if (month >= 1 && month <= 12)
            System.out.println("Month is: " + getMonthNameV2(month));
        else
            System.out.println("Please specify a valid month");
    }

    public void printV3(int month) {
        System.out.println("Month is: " + getMonthNameV3(month));
    }

    public String getMonthNameV2(int month) {
        if (month == 1) return "January";
        else if (month == 2) return "February";
        else if (month == 3) return "March";
        else if (month == 4) return "April";
        else if (month == 5) return "May";
        else if (month == 6) return "June";
        else if (month == 7) return "July";
        else if (month == 8) return "August";
        else if (month == 9) return "September";
        else if (month == 10) return "October";
        else if (month == 11) return "November";
        else if (month == 12) return "December";
        else return "Invalid";
    }

    public String getMonthNameV1(int month) {
        switch (month) {
            case 1: return "January";
            case 2: return "February";
            case 3: return "March";
            case 4: return "April";
            case 5: return "May";
            case 6: return "June";
            case 7: return "July";
            case 8: return "August";
            case 9: return "September";
            case 10: return "October";
            case 11: return "November";
            case 12: return "December";
            default: return "Invalid";
        }
    }

    public String getMonthNameV3(int month) {
        try {
            return monthCollection.get(month);
        } catch (java.lang.IndexOutOfBoundsException e) {
            return "Invalid";
        }
    }
}

It has three different ways to convert an integer into a month name where possible; otherwise the string "Invalid" is returned.

 

1. Create a new project:

clipboard1.png


Here you can find all supported programming languages:


clipboard2.png

2. Specify a project name and the directory for the SourceMonitor project file. I chose to store it in the same path as my test Java project.

clipboard3.png

3. Specify which source files will be scanned by SourceMonitor:

clipboard4.png

4. For the remaining steps in the wizard, just use the default settings and finish the wizard. Click the OK button to start the scan.

clipboard5.png

And soon we get the analysis result. Since we are more interested in the details of each method, we choose "Display Method Metrics" from the context menu.

clipboard6.png

From the result list it is easy to see that the third approach to month name retrieval is much better than the first two, both in complexity and in number of statements.

clipboard7.png

You can also view the result list via "Chart Method Metrics" from the context menu:

clipboard8.png

Take the complexity graph for example: the X axis is the complexity value of each method, and the Y axis is the number of occurrences of each complexity value.


clipboard9.png

Step by Step to use VisualVM to do performance measurement


Recently I have been trying to find a handy tool to measure the performance of my Java applications, and I think VisualVM, provided with the JDK, is the ideal one. This blog is written based on JDK 1.7 + Eclipse 4.3.2.

 

What is VisualVM

 

It is a tool automatically available after the JDK is installed. The executable file can be found in your <JDK installation folder>/bin as displayed below.

 

clipboard1.png

In order to measure the performance of your application, the application must first be recognized by VisualVM.

A plugin named "VisualVM launcher for Eclipse" can help with this.

 

Install and configure VisualVM launcher

 

1. Download the zip from http://visualvm.java.net/eclipse-launcher.html. Unzip the file and put it into the plugin folder of your Eclipse installation. On my laptop it looks like below. There should be a site.xml inside the unzipped folder.

clipboard2.png

2. In Eclipse, choose menu "Help->Install New Software", click "Local", and point the location to the folder from step 1.

clipboard3.png

Then the local downloaded plugin is successfully parsed and ready for install.

clipboard4.png

Finish the installation.


clipboard5.png

3. Restart Eclipse; you can then find a new option via the path below. Configure the two paths accordingly.

clipboard6.png

For "JDK Home", if you configure the JRE path by mistake, VisualVM will later fail to load when you try to measure your application, with the following error message:

clipboard7.png

Now the plugin is ready to use.

 

Do performance measurement

 

1. Select your Java project, choose context menu "Run as"->"Run configuration", and create a new application configuration specifying the VisualVM launcher instead of the default Eclipse JDT launcher.

clipboard8.png

2. For example, I have a Java application that sorts an array using a QuickSort algorithm I wrote myself, and I would like to get its performance data. I set a breakpoint on line 57, before my main logic is executed.
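The QuickSort application itself is not shown in this post; a comparable sketch (all names and sizes below are my own assumptions, not the original code) that could be profiled the same way:

```java
import java.util.Random;

public class QuickSortDemo {
    // Classic in-place quicksort with a Lomuto partition.
    public static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        quickSort(a, lo, i - 1);
        quickSort(a, i + 1, hi);
    }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(100_000, 0, 1_000_000).toArray();
        // A breakpoint here gives VisualVM time to attach before the work starts.
        quickSort(data, 0, data.length - 1);
        System.out.println("done, first element: " + data[0]);
    }
}
```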

 

Then launch the application in debugging mode with the application configuration created in previous step.

 

Afterwards VisualVM will automatically be launched and successfully recognize the execution of my application.

 

Click Profiler tab:

clipboard9.png

Current status: profiling inactive. Click CPU button:

clipboard10.png


Now profiling is activated:

clipboard11.png

3. Go back to Eclipse and press F8 to finish execution. Once finished, VisualVM will immediately capture this event and notify you. Just click Yes to get the performance result.

clipboard12.png

The result is displayed as below:

clipboard13.png

Using Optional in Java 8


The Optional class is available since Java 8. Let's use some examples to understand its logic.

 

Example 1

 

An Optional instance acts as a wrapper for your productive Java class. For example, I have a Person class:

 

class Person {
    private String mName;

    public Person(String name) {
        mName = name;
    }

    public void greet() {
        System.out.println("I am: " + mName);
    }
}

Line 21 will trigger a null pointer exception.

clipboard1.png

The console will print "Error: java.lang.NullPointerException - null". To avoid this we can use the method ofNullable and evaluate availability with the method isPresent. The following two lines, 28 and 32, will print false and true accordingly.

clipboard2.png
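The screenshots show the idea; as runnable text, the ofNullable/isPresent pattern looks roughly like this (the helper name is my own):

```java
import java.util.Optional;

public class OptionalBasics {
    // Wraps a possibly-null reference; isPresent() tells us whether a value exists.
    public static boolean hasValue(String s) {
        Optional<String> wrapped = Optional.ofNullable(s);
        return wrapped.isPresent();
    }
}
```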

Example 2

 

Instead of writing code like:

if (person != null) {
    person.greet();
}

to avoid null pointer exception before Java 8, now we can write:

clipboard3.png

The lambda expression specified within method ifPresent will only be executed if the method isPresent() returns true.

 

Example 3

 

Now we can avoid the ( condition ) ? x : y statement by using the orElse method of Optional:

clipboard4.png

clipboard5.png
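In runnable form, the orElse replacement for the ternary might look like this (a sketch with assumed names):

```java
import java.util.Optional;

public class OptionalOrElse {
    // Old style: return name != null ? name : "unknown";
    public static String displayName(String name) {
        return Optional.ofNullable(name).orElse("unknown");
    }
}
```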

Example 4

 

We need to figure out whether a person is Jerry or not. The old style always has to check whether the reference jerry is available.

clipboard6.png

The new approach is to use the filter method on an instance of Optional<Person>; thus the null check can be avoided (replaced by ifPresent).

 

old style-> Jerry found: Jerry
new style-> Jerry found: Jerry

 

The enhanced version using map function:

clipboard7.png

Difference between method map and flatMap

 

map accepts a function returning the unwrapped data without Optional (type U), while flatMap accepts a function returning the data already wrapped in Optional (type Optional<U>).

clipboard8.png
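A minimal sketch of the difference (class and method names are mine):

```java
import java.util.Optional;

public class MapVsFlatMap {
    // map: the mapper returns a plain value; Optional wraps it again for us.
    public static Optional<Integer> length(Optional<String> name) {
        return name.map(String::length);
    }

    // flatMap: the mapper itself returns an Optional, so no extra wrapping occurs.
    public static Optional<Integer> lengthFlat(Optional<String> name) {
        return name.flatMap(s -> Optional.of(s.length()));
    }
}
```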

An example of Java Remote debug


1. Start the Jetty server in debug mode via mvn jetty:run

clipboard1.png
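Note that mvn jetty:run does not open a debug port by itself; a common way (an assumption about this particular setup, adjust as needed) is to enable the JDWP agent via MAVEN_OPTS before starting Jetty:

```shell
# Hedged sketch: expose a JDWP debug port on 8000, then start Jetty.
export MAVEN_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000"
mvn jetty:run
```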

2. In Eclipse, create a new Debug configuration->Remote Java Application

clipboard2.png

Specify Host as localhost and port 8000:

clipboard3.png

Click debug button:

clipboard4.png

You should observe that the Jetty server listening on port 8000 has accepted this debug request and now serves the application at localhost:8080:

clipboard5.png

3. Go to localhost:8080 and perform an action to trigger the breakpoint:

clipboard6.png

And now in Eclipse, breakpoint is triggered!

clipboard7.png

Note: if you meet the error message "bind error, address already in use", first use "netstat -lp" to find the process id which occupies the address, then kill that process via kill -9 <pid>.

clipboard8.png

clipboard9.png

If it still does not work, restart the virtual machine instance.


An example of building Java project using Maven


Prerequisite: download and configure Maven on your laptop. Once done, type "mvn" in the command line and you should observe the following output from Maven:

clipboard1.png

Suppose I have a simple Java project with following package hierarchy:


clipboard2.png

The source code of MavenSandboxTest is also very simple:

 

package test.java.com.sap;

import static org.junit.Assert.assertEquals;

import main.java.com.sap.MavenSandbox;

import org.junit.Test;

public class MavenSandboxTest {
    @Test
    public void test() {
        MavenSandbox tool = new MavenSandbox();
        assertEquals(tool.hello(), "Hello world");
    }
}

How to build this simple Java project using Maven?

 

Create a pom.xml under the root folder of your project,

clipboard3.png

and paste the following source code:

 

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.sap.MavenSandbox</groupId>
    <artifactId>MavenSandbox</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

 

Create a new Configuration:

clipboard4.png


Specify the following settings and click run:

clipboard5.png

If everything goes well, you should see the Build Success message:

clipboard6.png

and there will be a new folder "target" generated automatically:

clipboard7.png

Go to the classes folder, and you can execute the compiled Java class via the command "java main.java.com.sap.MavenSandbox":

clipboard8.png

Or you can directly execute the jar file via the command below (you should first navigate back to the target folder):

clipboard9.png

Since we have specified the dependency on JUnit version 4.10 in pom.xml:

clipboard10.png

when "mvn clean install" is executed, you can observe that the corresponding jar file is automatically downloaded by Maven:

clipboard11.png

Finally, you can find the downloaded jar file in this folder:

clipboard12.png

Build a Spring Hello world application using Maven


Based on the learning from An example of building Java project using Maven, we can now use Maven for a more practical task.


I plan to create a Hello World application based on the Spring framework. Instead of manually downloading the Spring framework jar files and configuring their usage in my Java project, I can leverage Maven to make the whole process run automatically.

 

Install m2e - Maven integration plugin for Eclipse:

clipboard1.png

Then create a new Java project; you can easily convert this project to a Maven project via the context menu:

clipboard2.png

Once converted, you can then declare the dependency by clicking pom.xml and choose "Maven->Add Dependency" from context menu:

clipboard3.png

Enter group id, artifact id and version accordingly. Once done, the XML content should look as below:

 

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.sap.MavenSandbox</groupId>
    <artifactId>MavenSandbox</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>4.2.6.RELEASE</version>
        </dependency>
    </dependencies>
</project>

 

Trigger a build via mvn clean install, and Maven will automatically download the necessary jars of the Spring framework and store them in the .m2 folder.

clipboard4.png

Now we can start programming in Spring.

My project has the following hierarchy:

clipboard5.png

All missing imports can now be correctly resolved and easily fixed.

clipboard6.png

If you would like to debug the Spring framework source code, you can also download the related sources very easily via "Maven->Download Sources".

clipboard7.png

After that you can just set a breakpoint in the constructor of the HelloWorld class and study how Spring instantiates, via reflection, the instance of this class configured in beans.xml:

clipboard8.png
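What Spring does there can be approximated by a tiny sketch of reflective bean creation. This is a heavy simplification of the real mechanism, and all names below are invented for illustration:

```java
import java.lang.reflect.Method;

public class MiniContainer {
    // A stand-in for a bean class configured in XML.
    public static class Greeting {
        private String message;
        public void setMessage(String message) { this.message = message; }
        public String getMessage() { return message; }
    }

    // Simplified bean creation: instantiate by class name, then inject
    // a property through its setter, roughly as a DI container might.
    public static Object createBean(String className, String property, Object value) {
        try {
            Class<?> clazz = Class.forName(className);
            Object bean = clazz.getDeclaredConstructor().newInstance();
            String setter = "set" + Character.toUpperCase(property.charAt(0)) + property.substring(1);
            Method m = clazz.getMethod(setter, value.getClass());
            m.invoke(bean, value);
            return bean;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Bean creation failed", e);
        }
    }
}
```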

The source code of files used in the project

 

HelloWorld.java

 

package main.java.com.sap;
public class HelloWorld {    private String message;    public void setMessage(String message){       this.message  = message;    }    public HelloWorld(){     System.out.println("in constructor");    }      public void getMessage(){       System.out.println("Your Message : " + message);    }
}

 

MavenSandbox.java

 

package main.java.com.sap;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class MavenSandbox {
    public static void main(String[] args) {
        ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");
        HelloWorld obj = (HelloWorld) context.getBean("helloWorld");
        obj.getMessage();
    }

    public String hello() {
        return "Hello world";
    }
}

 

MavenSandboxTest.java

 

package test.java.com.sap;

import static org.junit.Assert.assertEquals;

import main.java.com.sap.MavenSandbox;

import org.junit.Test;

public class MavenSandboxTest {
    @Test
    public void test() {
        MavenSandbox tool = new MavenSandbox();
        assertEquals(tool.hello(), "Hello world");
    }
}

 

beans.xml

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
    <bean id="helloWorld" class="main.java.com.sap.HelloWorld">
        <property name="message" value="Hello World!"/>
    </bean>
</beans>

An example of Deadlock detection using JDK standard tool: jstack


We can get the concept of deadlock in Wikipedia.

The picture below gives a common scenario which leads to deadlock.

http://scn.sap.com/servlet/JiveServlet/downloadImage/38-144258-983599/620-365/1.png

In this blog, I will share how to detect deadlock situation using JDK standard tool jstack.

 

First we have to write a Java program which will lead to Deadlock:

package thread;

public class DeadLockExample {

    /*
     * Thread 1: locks resource 1
     * Thread 2: locks resource 2
     */
    public static void main(String[] args) {
        final String resource1 = "ABAP";
        final String resource2 = "Java";

        // t1 tries to lock resource1 then resource2
        Thread t1 = new Thread() {
            public void run() {
                synchronized (resource1) {
                    System.out.println("Thread 1: locked resource 1");
                    try {
                        Thread.sleep(100);
                    } catch (Exception e) {
                    }
                    synchronized (resource2) {
                        System.out.println("Thread 1: locked resource 2");
                    }
                }
            }
        };

        // t2 tries to lock resource2 then resource1
        Thread t2 = new Thread() {
            public void run() {
                synchronized (resource2) {
                    System.out.println("Thread 2: locked resource 2");
                    try {
                        Thread.sleep(100);
                    } catch (Exception e) {
                    }
                    synchronized (resource1) {
                        System.out.println("Thread 2: locked resource 1");
                    }
                }
            }
        };

        t1.start();
        t2.start();
    }
}

Execute this program and you will get the output:

Thread 1: locked resource 1
Thread 2: locked resource 2

 

Then use the command jps -l -m to list the process id of this deadlocked program. In my example it is 51476:

clipboard1.png

Just type jstack <process id>, and it will display detailed information about the deadlock:

clipboard2.png

Here the objects 0x00000000d6f64988 and 0x00000000d6f649b8 represent the two resource Strings "ABAP" and "Java".

clipboard3.png
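The same check jstack performs is also available in-process via the ThreadMXBean API, which can be handy in tests or monitoring code:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class DeadlockProbe {
    // Returns the ids of deadlocked threads, or null if none are detected.
    // This is essentially the check jstack runs, exposed as an API.
    public static long[] findDeadlocks() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        return bean.findDeadlockedThreads();
    }
}
```

The usual fix for the example above is to acquire resource1 and resource2 in the same order in both threads, so a circular wait can never form.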

Demo on Sending Data to the Graylog by using GELF and get the Logging data on Graylog console


Sending Data to the Graylog by using GELF and get the Logging data on Graylog console:


The Graylog principle is that it receives logging information as data sent from our application layer.


1.Create the Input in the Graylog and Create the Content pack


2.Export/Download the content pack


3.Upload the Content Pack


4.Configure the GELF library for Logback library


5.Configure the logback.xml file


6.Run the application


7.Check the logging data in the Graylog Console.


1.Create the Input in the Graylog and Create the Content pack:


Configure the input in the Graylog for GELF TCP:


1.Select GELF TCP

 

pastedImage_0.jpg

2. Click on the "Launch new input" button and enter the required details as in the screen below,

 

pastedImage_1.jpg

3.Click on the “Launch” button.

 

Then you should see the Gelfjava (GELF TCP) input appear on the Graylog console.


   
  pastedImage_2.jpg

2.Export/Download the content pack:

 

Content pack: Content packs are bundles of Graylog input, extractor, stream, dashboard, and output configurations that can provide full support for a data source. Content packs are available in the Graylog marketplace, so required content packs can be imported using the Graylog web interface.

 

Go to System-> Select Content Packs->Click on Create a content pack button.


 
  pastedImage_3.jpg

The page then navigates to the "Create a content pack" page; fill in the required fields.


  
 
  pastedImage_4.jpg

Then click on the "Download my content pack" button, which is located on the same page, i.e. the Create a content pack page. A content-pack.json file will be downloaded.


 
  pastedImage_5.jpg

 

Save the downloaded content-pack.json file to a local drive.

 

Then go back to Content Packs and click on the "Import content pack" button.

 

pastedImage_6.jpg

3.Upload the Content Pack:

 

Click on the Choose File button, select the content-pack.json file from your system, and click the Upload button.

 

pastedImage_7.jpg


The created content pack is then located on the same Content Packs page under its category name (here Operating Systems).

Click on this category name (here Operating Systems); the content pack name we created (here logback-Gelf) will appear. Select the radio button and click on the Apply content button.

 

pastedImage_8.jpg

Then you will get a message at the top of the page like "Success! Bundle applied successfully".

 

pastedImage_9.jpg

 

4.Configure the GELF library for Logback library:

 

GELF / Sending from applications:

The Graylog Extended Log Format (GELF) is a log format that avoids the shortcomings of classic plain syslog and is perfect for logging from your application layer. It comes with optional compression, chunking and, most importantly, a clearly defined structure. There are dozens of GELF libraries for many frameworks and programming languages to get you started.

 

Here I chose the logback-gelf library.


  Setup with our application:

 

Add the dependency in the POM.xml file of Maven:

<dependency>
    <groupId>me.moocar</groupId>
    <artifactId>logback-gelf</artifactId>
    <version>0.3</version>
</dependency>

 

5.Configure the logback.xml file :

 

Add the logback.xml file in the application.

 

Configurations in the logback.xml,

 

  1. Add the RemoteHost
  2. Add the Port Number
  3. Add the Host


<?xml version="1.0" encoding="UTF-8"?>

<configuration>
    <!-- Use TCP instead of UDP -->
    <appender name="GELF TCP APPENDER" class="me.moocar.logback.net.SocketEncoderAppender">
        <remoteHost>000.00.00.00</remoteHost>
        <port>12201</port>
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="me.moocar.logbackgelf.GelfLayout">
                <!-- An example of overwriting the short message pattern -->
                <shortMessageLayout class="ch.qos.logback.classic.PatternLayout">
                    <pattern>%ex{short}%.100m</pattern>
                </shortMessageLayout>
                <!-- Use HTML output of the full message. Yes, any layout can be used (please don't actually do this) -->
                <fullMessageLayout class="ch.qos.logback.classic.html.HTMLLayout">
                    <pattern>%relative%thread%mdc%level%logger%msg</pattern>
                </fullMessageLayout>
                <useLoggerName>true</useLoggerName>
                <useThreadName>true</useThreadName>
                <useMarker>true</useMarker>
                <host>000.00.00.00</host>
                <additionalField>ipAddress:_ip_address</additionalField>
                <additionalField>requestId:_request_id</additionalField>
                <includeFullMDC>true</includeFullMDC>
                <fieldType>requestId:long</fieldType>
                <!-- Facility is not officially supported in GELF anymore, but you can use staticFields to do the same thing -->
                <staticField class="me.moocar.logbackgelf.Field">
                    <key>_facility</key>
                    <value>Gelfjava</value>
                </staticField>
            </layout>
        </encoder>
    </appender>

    <root level="debug">
        <appender-ref ref="GELF TCP APPENDER"/>
    </root>
</configuration>


 
  6.Run the application:

 

Run the application.

 

pastedImage_10.jpg

Then go to the browser, refresh the Graylog URL, and click System->Inputs; you should see the screen below.

 

pastedImage_11.jpg


Note: In the above screen you can see a red circle at the top center. If a Gelfjava (GELF TCP) input already exists, the Graylog server indicates that this particular connection is already in use and marks it as a failed connection.

As a Graylog administrator you are able to delete the failed connection.

 

7.Check the logging data in the Graylog Console:

 

Then click on "Show Received Messages"; you will see the collection of log messages as in the screen below.

 

pastedImage_12.jpg

 

Sometimes you will get "Nothing Found" instead of the above screen; in that case you have to open the configured port numbers in the system network settings of the remote host.
Refer links:
1. Overview on the Graylog
2. Installation Steps of Graylog-Part1
3. Installation Steps of Graylog-Part2
4. Demo On Configuring Graylog input and get messages

Overview on the Graylog


Introduction:

Graylog is a fully integrated open source log management platform for collecting, indexing, and analyzing both structured and unstructured data from almost any source.

Overview:

If you need to make an analysis of logs, note that there is an open source tool called Graylog which can collect, index and analyze structured and unstructured data from various sources.

          1. Started by Lennart Koopmann in his free time in 2010 (Graylog2 at that time)

          2. TORCH GmbH founded as company behind Graylog in late 2012

          3. Big rewrite that got released as 0.20 in Feb 2014

          4. New US based company Graylog, Inc. founded in Jan 2015

          5. Renamed from Graylog2 to Graylog

          6. Graylog 1.0 release in Feb 2015


Management tools:

Configuration management tools allow us to manage our computing resources in an effective and consistent way.

They make it easy to run hundreds or thousands of machines without having to manually execute the same tasks over and over again.

By using shared modules/cookbooks it is pretty easy to end up with hundreds of managed resources like files, packages and services per node.

Nodes can be configured to check for updates and to apply new changes automatically.

This helps us to roll out changes to lots of nodes very easily but also makes it possible to quickly break our infrastructure resulting in outages.

So being able to collect, analyze, and monitor all events that happen sounds like a job for Graylog.

 

 

Levels of Log Management:

Log management lets you get more out of data than grepping flat files stored on the host computer system, by giving you access to the structure of the data so you can manipulate it.

Log management can be done on different levels:

Level1: Do not collect logs at all.

Level2: Collect logs. Mostly simple log files from email or HTTP servers.

Level3: Use the logs for forensics and troubleshooting. Why was an email not sent out? Why was that HTTP 500 thrown?

Level4: Save searches. The most basic case would be to save a grep command you used.

Level5: Share searches. Store that search command somewhere so co-workers can find and use it to solve similar problems.

Level6: Reporting. Easily generate reports from your logs: how many exceptions did we have this week versus past weeks? People can use charts and PDFs.

Level7: Alerting. Automate some of your troubleshooting tasks. Be warned automatically instead of waiting for a user to complain.

Level8: Collect more logs. We may need more log sources for some use cases: firewall logs, router logs, even physical access logs.

Level9: Correlation. Manual analysis of all that new data may take too long. Correlate different sources.

Level10: Visual analysis, Pattern detection, interaction visualization, dynamic queries, anomaly detection, sharing and more sharing.


Then we need a central place to send our logs; for this, graylog(2)-server was introduced.

And we need a central place to make use of those logs; for this, graylog(2)-web-interface was introduced.


How to send logs:

Classic syslog via TCP/UDP

GELF via TCP/UDP

Both via AMQP, or write your own input plugins.

GELF: Graylog Extended Log Format-Lets you structure your logs.

Many libraries for different systems and languages available.


  Eg:

{
  "short_message": "Something went wrong",
  "host": "some-host-1.example.org",
  "severity": 2,
  "facility": "some subsystem",
  "full_message": "stacktrace and stuff",
  "file": "some_controller.rb",
  "line": 7,
  "_from_load_balancer": "lb-3",
  "_user_id": 9001,
  "_http_response_code": 500
}


Log messages types:

There are 2 types of log messages.

Type1: Automatically generated from a service. Usually a huge amount of structured but raw data. You have only limited control over what is logged.

Type2: Logs sent directly from within your applications. Triggered for example by a log.error() call or an exception catcher. It is possible to send highly structured data via GELF.

Architecture:


                  graylog.JPG

 

As presented in the above Graylog architecture, it is depending on components.

                   1. ElasticSearch

                   2. MongoDB

                   3. Graylog

1. ElasticSearch: ElasticSearch is useful for storing logs and searching text.

2. MongoDB: MongoDB is useful for Metadata Management.

3. Graylog: Graylog can help you better understand what happens within your applications, improve their security, and reduce costs.

Architectural Considerations:

There are a few rules of thumb when scaling resources for Graylog:

1. graylog-server nodes should have a focus on CPU power.

2. Elasticsearch nodes should have as much RAM as possible and the fastest disks you can get. Everything depends on I/O speed here.

3. MongoDB is only being used to store configuration and the dead letter messages, and can be sized fairly small.

4. graylog-web-interface nodes are mostly waiting for HTTP answers of the rest of the system and can also be rather small.

5. graylog-radio nodes act as workers. They don’t know each other and you can shut them down at any point in time without changing the cluster state at all.

Also keep in mind that messages are only stored in Elasticsearch. If you have data loss in Elasticsearch, the messages are gone - unless you have created backups of the indices.

MongoDB is only storing meta information and will be abstracted with a general database layer in future versions. This will allow you to use other databases like MySQL instead.

 

Minimum Setup:

This is a minimal Graylog setup that can be used for smaller, non-critical, or test setups. None of the components are redundant, but it is easy and quick to set up.

                              graylog1.JPG

 

Bigger Production Setup:

This is a setup for bigger production environments. It has several graylog-server nodes behind a load balancer that share the processing load. The load balancer can ping the graylog-server nodes via REST/HTTP to check if they are alive and take dead nodes out of the cluster.

 

                        extended_setup.png

 

Refer links:
1. Installation Steps of Graylog-Part1
2. Installation Steps of Graylog-Part2
3. Demo On Configuring Graylog input and get messages
4. Demo on Sending Data to the Graylog by using GELF and get the Logging data on Graylog console
