Channel: SCN : Blog List - Java Development

Installation Steps of Graylog-Part1


Graylog Installation:

Modern server architectures and configurations are managed in many different ways. Some people still install new software manually under /opt on each server, while others have jumped on the configuration-management train and run fully automated, reproducible setups.

Graylog can be installed in many different ways, so you can pick whatever works best for you. We recommend starting with the virtual machine appliances for the fastest way to get going, and then picking one of the other, more flexible installation methods to build a setup that is easier to scale. (Note: the virtual machine appliances are not suitable for production use, because they are not prepared to scale out when required.)

The Graylog web interface has the following prerequisites:

 

  1. A modern Linux distribution (Debian Linux, Ubuntu Linux, or CentOS recommended)
  2. Oracle Java SE 7 or later (Oracle Java SE 8 is supported; OpenJDK 7 and OpenJDK 8 also work; the latest point release is recommended)

 

Components:

 

              1. MongoDB

              2. ElasticSearch

              3. Graylog

              4. Graylog Web Interface

 

Installation Steps:


Installing Java:


    1. Elasticsearch runs on Java, so we can install OpenJDK.

       To install OpenJDK, use a command like:

 

       [root@localhost ~]# yum install java



 

       Then verify the installed Java version.
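The verification screenshot is missing from this copy; the usual check looks like this (the exact version string depends on the JDK build you installed):

```shell
# Print the installed Java version; output varies by JDK build
java -version
```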

 

Installing EPEL :

 

Configure EPEL repository on CENTOS 7/ RHEL 7:

 

This explains how to enable EPEL (Extra Packages for Enterprise Linux) on the newly released CentOS 7 / RHEL 7. EPEL is maintained by a special interest group from Fedora that creates and maintains high-quality additional packages for Enterprise Linux variants, including Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Enterprise Linux (OEL).

Install EPEL repository:

Install the EPEL rpm using a command like the following.
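The command screenshot is missing here; on CentOS 7 / RHEL 7 this step is typically a single yum call (a sketch — run it as root):

```shell
# Install the EPEL repository definition package
yum -y install epel-release
```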

 


 

List the installed repos:

You can find the EPEL repo in the list.
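The repo listing was a screenshot; a sketch of the usual command:

```shell
# Show enabled repositories; look for "epel" in the repo id column
yum repolist
```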

 

EPEL packages:

List the packages provided by EPEL, and install any package you need in the usual way.
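The package-list and install screenshots are lost; a sketch of both steps — htop is just an example package I picked, not one required by Graylog:

```shell
# List only the packages that come from the EPEL repo
yum --disablerepo="*" --enablerepo="epel" list available

# Install a package from EPEL like any other (htop as an example)
yum -y install htop
```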

 

Install ElasticSearch:

 

Elasticsearch is an open source search server offering real-time distributed search and analytics through a RESTful interface. Elasticsearch stores all the logs sent by the Graylog server and returns the matching messages whenever the Graylog web interface requests them to fulfill user requests.

 

Import the GPG key and add the Elasticsearch repository, then install the Elasticsearch package.

Configure Elasticsearch to start during system startup.
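The original commands were screenshots; for the Elasticsearch 1.x line that Graylog 1.x used, they would look roughly like this. The key URL, repo version, and baseurl are assumptions from that era — check the Elasticsearch site for current values:

```shell
# Import the Elasticsearch GPG signing key (URL assumed)
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

# Add a yum repository definition (version and baseurl are assumptions)
cat > /etc/yum.repos.d/elasticsearch.repo <<'EOF'
[elasticsearch-1.7]
name=Elasticsearch repository for 1.7.x packages
baseurl=https://packages.elastic.co/elasticsearch/1.7/centos
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF

# Install Elasticsearch and enable it at boot
yum -y install elasticsearch
systemctl enable elasticsearch
```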

 

The only important thing is to set the cluster name to “graylog2”, which is what Graylog uses. Now edit the configuration file of Elasticsearch.

 

Disable dynamic scripting to avoid remote code execution; this is done by adding the following line at the end of the above file.
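The edited lines from the missing screenshots would be these two entries in /etc/elasticsearch/elasticsearch.yml (the script setting applies to the Elasticsearch 1.x series this guide targets):

```yaml
cluster.name: graylog2
script.disable_dynamic: true
```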

 

Once that is done, restart the Elasticsearch service to load the modified configuration.

 

Wait at least a minute for Elasticsearch to restart fully, otherwise testing will fail. Elasticsearch should now be listening on port 9200 for HTTP requests; we can use curl to get a response. Ensure that it returns the cluster name “graylog2”.

 

 

Optional: use the following command to check the Elasticsearch cluster health; you must get a cluster status of “green” for Graylog to work.
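The restart and the two curl checks were screenshots; sketched here (localhost assumed as the Elasticsearch host):

```shell
# Restart to load the new configuration
systemctl restart elasticsearch

# After a minute, the root endpoint should answer and report cluster_name "graylog2"
curl -X GET http://localhost:9200

# Optional: cluster health must be "green" for Graylog
curl -X GET 'http://localhost:9200/_cluster/health?pretty=true'
```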

 

Install MongoDB:


  MongoDB is available in RPM format and can be downloaded from the official website. Add the following repository definition on the system so MongoDB can be installed using yum.
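The repository definition in the lost screenshot would be along these lines, saved as /etc/yum.repos.d/mongodb-org.repo (the repo name and baseurl are assumptions — take the current ones from the MongoDB site):

```ini
[mongodb-org]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/stable/x86_64/
gpgcheck=0
enabled=1
```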

 

Install MongoDB using a command like the following.

 

If you use SELinux, you must also install the policy management package to configure certain elements of SELinux policy.

 

Run the following command to configure SELinux to allow MongoDB to start.

 

Start the MongoDB service and enable it to start automatically during system start-up.
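The four MongoDB screenshots above are lost; on CentOS 7 the steps would be sketched like this (package names match the MongoDB yum repo of that era):

```shell
# Install MongoDB and the SELinux policy tools
yum -y install mongodb-org policycoreutils-python

# Permit MongoDB's default port under SELinux
semanage port -a -t mongod_port_t -p tcp 27017

# Start the service now and at every boot
systemctl start mongod
systemctl enable mongod
```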

 

Install Graylog:


  Graylog-server accepts and processes the log messages, and also exposes the REST API used by graylog-web-interface. Download the latest version of Graylog from graylog.org.

  Install the Graylog repository using a command like the following.

 

Then install the latest graylog-server package.
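The commands were screenshots; a sketch for the Graylog 1.x era — the repository rpm URL below is an example from that time, not necessarily current, so take the latest link from graylog.org:

```shell
# Install the Graylog repository package, then the server (URL is an example)
rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-1.2-repository-el7_latest.rpm
yum -y install graylog-server
```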

 

Edit the server.conf file.

 

Configure the following variables in that file.

Set a secret to secure the user passwords; use the following command to generate a secret of at least 64 characters.

 

Note: do not forget to configure the EPEL repository on CentOS 7 / RHEL 7, as explained above.

If you get “pwgen: command not found”, use the following command to install pwgen.
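pwgen comes from EPEL, which is why the note above matters. A sketch of the install and generation, with a portable /dev/urandom fallback that works even without pwgen:

```shell
# Install pwgen (from EPEL) and generate one 96-character secret:
#   yum -y install pwgen
#   pwgen -N 1 -s 96

# Portable fallback: 96 random alphanumeric characters from /dev/urandom
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 96
echo
```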

 

Place the secret in the password_secret variable.

 

Next, set a hash password for the root user (not to be confused with the system root; Graylog's root user is admin). You will use this password to log in to the web interface. The admin password cannot be changed through the web interface, so you must set it via this variable.
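The hashing command itself was a screenshot; root_password_sha2 expects the SHA-256 hex digest of your chosen password, which the coreutils sha256sum tool produces (initial123 is the example password used later in this guide):

```shell
# Hash the admin password; paste the 64-character digest into root_password_sha2
echo -n initial123 | sha256sum | awk '{print $1}'
```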

 

Continued in: Installation Steps of Graylog-Part2


Installation Steps of Graylog-Part2


Refer link: Installation Steps of Graylog-Part1

Place the hash password in the root_password_sha2 variable.

 

Make the following changes in the server.conf file.
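The screenshots of the edited settings are lost; a typical minimal set of server.conf values for this walkthrough would look like the sketch below. Every value here is an example/assumption — adjust addresses and names to your host and Graylog version:

```ini
is_master = true
elasticsearch_cluster_name = graylog2
elasticsearch_shards = 1
elasticsearch_replicas = 0
rest_listen_uri = http://127.0.0.1:12900/
mongodb_uri = mongodb://localhost/graylog2
```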

 

 

Start the Graylog server using the following command.

 

You can check the server startup logs; they will help you troubleshoot Graylog in case of any issue.
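Sketched from the lost screenshots — the log path is the default used by the RPM packages:

```shell
# Start graylog-server now and on every boot
systemctl start graylog-server
systemctl enable graylog-server

# Follow the startup log
tail -f /var/log/graylog-server/server.log
```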

 

 

Install Graylog Web Interface:

To configure graylog-web-interface, you must have at least one graylog-server node. Install the web interface package with yum.

 

Edit the configuration file and set the following parameters.

 

This is the list of graylog-server nodes; you can add multiple nodes, separated by commas.

 

Set the application secret; you can generate it using pwgen -N 1 -s 96.

 

Restart graylog-web-interface using the following command.
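The commands and settings from the lost screenshots would look roughly like this; the package name, file path, and setting names match the Graylog 1.x packages and are assumptions to verify against your version:

```shell
# Install the web interface package
yum -y install graylog-web

# In /etc/graylog-web/web.conf set:
#   graylog2-server.uris="http://127.0.0.1:12900/"    # comma-separate several nodes
#   application.secret="<output of pwgen -N 1 -s 96>"

# Restart the web interface
systemctl restart graylog-web
```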

 

Access Graylog Web Interface:

 

The web interface listens on port 9000; configure the firewall to allow traffic on port 9000.
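On CentOS 7 / RHEL 7 with firewalld (the default), the sketch is:

```shell
# Open TCP port 9000 permanently and reload the firewall rules
firewall-cmd --zone=public --add-port=9000/tcp --permanent
firewall-cmd --reload
```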

 

Point your browser to http://ip-add-ress:9000 and log in with the username “admin” and the password you configured in root_password_sha2 in server.conf.

For local access, browse to http://127.0.0.1:9000; for remote access, browse to http://your_remote_ip:9000.

This brings up the Graylog welcome page.

 

 

1. Launch Graylog using the local URL http://127.0.0.1:9000; the Graylog page appears as in the view below.

 

50.png

 

 

2. Launch Graylog using the remote URL http://your_remote_ip:9000; the Graylog page appears as in the view below.

 

 

51.jpg

 

 

 

Sign in to Graylog with the login credentials:

Username: admin

Password: initial123 (the password we hashed earlier)

 

 

52.jpg

 

 

If the login succeeds, you will see the Graylog console page with a “Nothing found” message.

 

53.jpg

 

If the login fails, you will see a view like the one below.

 

 

54.jpg

Demo On Configuring Graylog input and get messages


Demo On Configuring Graylog input and get messages:

  By using Graylog we can see the whole picture: log collection, indexing, and search.

If we are not yet sending any data (application data, JSON data, etc.) to Graylog, we need to configure an input; the input tells Graylog to accept log messages.

 

Configuring the Graylog input:

 

1. Launch the Graylog home page using the URLs above.

 


 
2. Enter a valid Username and Password; the page will navigate to the Graylog console page.


 
 
 

3. The Graylog console page

Note: the first time you launch the Graylog console, Graylog does not show the histogram or any messages.

 



    4. To configure the input in Graylog, click System -> Inputs.

   Note: the first time you launch Graylog, no global or local inputs exist.

 


 

5. Select Syslog UDP and click the Launch New Input button.

 


 

6. Give a Title (e.g. Demo Syslog UDP), a Bind Address (a local or remote IP address), and Port 5140, then click the Launch button at the bottom right of the “Launch new input” pop-up.

 


 
  7. Check whether you have messages: you should see the Syslog UDP input appear on the Graylog console.
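To make sure the input actually receives something, you can hand-feed it a test message; the address and port come from step 6, and `logger` here is the util-linux version that supports remote targets:

```shell
# Send a test message to the Syslog UDP input on 127.0.0.1:5140
logger -n 127.0.0.1 -P 5140 "Hello Graylog, test message"

# Alternative with netcat, writing a minimal syslog line by hand
echo "<13>$(date '+%b %d %H:%M:%S') myhost demo: Hello Graylog" | nc -u -w 1 127.0.0.1 5140
```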

 


 

8. Click the Show received messages button; you should see the screen below.

 


That's it for this demo on configuring a Graylog input and getting messages.

Compare two Business Entities using JPA/Reflection in Java


In this tutorial, Java development team experts share a way to compare two business entities using JPA and reflection. Read on to discover how they do it.


Use case:

Let’s say you want to compare two objects of, of course, the same type — you can’t compare an Integer with a String, since an Integer can never be a String. If the object contains only a few attributes we can compare them one by one, but what if the object contains 20, 30, 40, or a hundred attributes? In a real enterprise application an entity can easily have hundreds of attributes, and many of them are transient while the others are persistent — yet we have to compare only the persistent attributes.


How to do it?

Can you write a hundred if-else conditions and compare each of them? That is not good programming at all; if you come up with that solution, your team lead will never accept it.


Then comes the solution: Java reflection.

 

Let’s say you have an Employee entity like the one below:


Code snippet to create table:


EmployeeBE:

import java.io.Serializable;

 

import javax.persistence.Column;

import javax.persistence.Entity;

import javax.persistence.FetchType;

import javax.persistence.GeneratedValue;

import javax.persistence.GenerationType;

import javax.persistence.Id;

import javax.persistence.JoinColumn;

import javax.persistence.ManyToOne;

import javax.persistence.Table;

import javax.persistence.Transient;

 

//// Named queries

 

@Table(schema = "EMP", name = "T_EMPLOYEE")

@javax.persistence.SequenceGenerator(name = "EmployeeBE", allocationSize = 1, initialValue = 1, sequenceName = "SEQ_EMPLOYEE")

@Entity

public class EmployeeBE implements Serializable {

 

/**

* the serialVersionUID

*/

private static final long serialVersionUID = 6058067959150204025L;

 

 

@Id

@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "EmployeeBE")

private Integer id;

 

@Column(name = "FIRS_NAME")

private String firstName;

 

@Column(name = "LAST_NAME")

private String lastName;

 

@Column(name = "ADDRESS")

private String address;

 

@Column(name = "EMAIL")

private String email;

 

@Column(name = "MOBILE_NUMBER")

private String mobileNumber;

 

@Column(name = "MANAGER")

private boolean isManager;

 

@ManyToOne(fetch = FetchType.LAZY)

@JoinColumn(name = "MANAGER_ID")

private MangerBE managerBE;

 

@Column(name = "ROLE")

private String role;

 

@Column(name = "SALARY")

private Double salary;

 

@Column(name = "MARRIED_STATUS")

private boolean marriedStatus;

 

@Transient

private boolean employeeType;

 

/// many other fields

 

public void setFirstName(String firstName) {
    this.firstName = firstName;
}

public void setLastName(String lastName) {
    this.lastName = lastName;
}

public void setId(Integer id) {
    this.id = id;
}

public void setAddress(String address) {
    this.address = address;
}

public void setManager(boolean pisManager) {
    this.isManager = pisManager;
}

public void setRole(String role) {
    this.role = role;
}

public void setSalary(Double salary) {
    this.salary = salary;
}

public void setEmployeeType(boolean employeeType) {
    this.employeeType = employeeType;
}

 

}



If you observe the entity above, I added many attributes; many of them are persistent and some are transient.

Now we need to check whether all the persistent attributes are equal.

I wrote a helper class with methods to compare the attributes via reflection.


import java.lang.reflect.Field;

import java.lang.reflect.Method;

import java.util.ArrayList;

import java.util.Arrays;

import java.util.Collection;

 

import javax.persistence.Column;

import javax.persistence.Id;

import javax.persistence.Version;

 

public class AttributeCompareHelper {

    private AttributeCompareHelper() {
    }

    public static boolean areObjectsEqual(EmployeeBE first, EmployeeBE second, Collection<String> toBeExcludeAttributes) {
        boolean retVal = first == second;
        if (first != null && second != null && first.getClass().equals(second.getClass())) {
            final Field[] fields = getSupportedFields(first.getClass());
            for (final Field field : fields) {
                final String fieldName = field.getName();
                if (field.isAnnotationPresent(Column.class) && !field.isAnnotationPresent(Id.class)
                        && !toBeExcludeAttributes.contains(fieldName)) {
                    final Object value1 = getValue(first, field);
                    final Object value2 = getValue(second, field);
                    if (!compareValues(value1, value2)) {
                        return false;
                    }
                }
            }
            retVal = true;
        }
        return retVal;
    }

 

    public static Object getValue(Object object, Field field) {
        final String fieldName = field.getName();
        return processGetMethod(object, fieldName);
    }

 

    private static Object processGetMethod(Object object, String fieldName) {
        final Class<?> objClass = object.getClass();
        try {
            final Method method = getGetterMethod(objClass, fieldName);
            if (method == null) {
                throw new NoSuchMethodException();
            }
            method.setAccessible(Boolean.TRUE);
            return method.invoke(object);
        } catch (final Exception e) {
            // no usable getter found: treat the value as absent
        }
        return null;
    }

    public static Method getGetterMethod(Class<?> objClass, String fieldName) throws NoSuchMethodException {
        String methodName = getGetterName(fieldName);
        Method isMethodFound = findGetterMethod(objClass, methodName);
        if (isMethodFound == null) {
            methodName = getBooleanGetterMethod(fieldName);
            isMethodFound = findGetterMethod(objClass, methodName);
            if (isMethodFound == null
                    || !(isMethodFound.getReturnType() == Boolean.class || isMethodFound.getReturnType() == boolean.class)) {
                throw new NoSuchMethodException("No such method in the class " + objClass.getName());
            }
        }
        return isMethodFound;
    }

    public static String getGetterName(String fieldName) {
        return "get" + fieldName.substring(0, 1).toUpperCase() + fieldName.substring(1);
    }

    private static Method findGetterMethod(Class<?> objClass, String methodName) throws SecurityException {
        if (objClass == null) {
            return null;
        }
        for (final Method method : objClass.getDeclaredMethods()) {
            if (isMethodMatched(method, methodName)) {
                return method;
            }
        }
        return findGetterMethod(objClass.getSuperclass(), methodName);
    }

    private static boolean isMethodMatched(Method md, String methodName, Class<?>... paramTypes) {
        if (paramTypes == null) {
            paramTypes = new Class[0];
        }
        final Class<?>[] mdParamTypes = md.getParameterTypes();
        if (mdParamTypes.length != paramTypes.length) {
            return false;
        }
        for (int i = 0; i < mdParamTypes.length; i++) {
            if (!mdParamTypes[i].equals(paramTypes[i])) {
                return false;
            }
        }
        return md.getName().equalsIgnoreCase(methodName);
    }

    public static String getBooleanGetterMethod(String fieldName) {
        return "is" + fieldName.substring(0, 1).toUpperCase() + fieldName.substring(1);
    }

    public static boolean compareValues(Object first, Object second) {
        if (first == null) {
            return second == null;
        }
        return second != null && first.equals(second);
    }

    public static Field[] getSupportedFields(Class<?> cl) {
        final Collection<Field> fieldCol = new ArrayList<Field>();
        for (; cl != null; cl = cl.getSuperclass()) {
            fieldCol.addAll(Arrays.asList(cl.getDeclaredFields()));
        }
        return fieldCol.toArray(new Field[fieldCol.size()]);
    }
}


The above class works like this.


  1. First it checks whether the @Column annotation is present on the attribute.
  2. @Id columns are not compared — comparing them makes no sense, because if the ids are equal they are the same entity — so they are skipped.
  3. It also skips attributes whose names appear in the excluded-columns collection.
  4. Then it fetches the corresponding value by calling the getValue() method.
  5. getValue() in turn processes the getter method for that attribute; to execute it, a getter must exist for that attribute.
  6. So it looks up the getter based on the type, because per the Java naming convention, getters for boolean attributes start with "is".
  7. If the method is not present in the current class, the superclass is checked.
  8. Once the getter is found it is invoked to get the attribute's value; the same is repeated on the other object for each attribute.
  9. The compareValues() method checks the two getter results for equality.
  10. If any pair is not equal, the comparison returns false; otherwise it returns true.

 

Now let’s test this. For this I wrote a small demo class as below.

import java.util.ArrayList;

 

public class AttributeCompareDemo {

    public static void main(String[] args) {

        EmployeeBE e1 = new EmployeeBE();
        e1.setFirstName("Super");
        e1.setLastName("Star");

        EmployeeBE e2 = new EmployeeBE();
        e2.setFirstName("Super");
        e2.setLastName("Super");

        boolean areEqual = AttributeCompareHelper.areObjectsEqual(e1, e2, new ArrayList<String>());
        System.out.println("Objects are equal: " + areEqual);
    }
}

If you run the above program, the output will be like this:

 

Info 1.png

If you observe the output, it is false, because the last names differ;


Info 2.png

If you observe the screenshot, the output is true, because I changed the last name so they match.


In this way you can set many attributes and check them.


The experts of the Java development team have shared their approach to comparing two business entities using JPA and reflection. If you want to ask anything, or any point is left unexplained, please write to the experts and wait for their response.

 

Conclusion:

 

By using the Java reflection mechanism we can compare many attributes without writing many compare statements. And not only JPA entities — as said above, we can compare any other objects as well.

But we need to make the exclusions configurable so that we can differentiate them.

An example to explain why we need AOP - Aspect Oriented Programming


The definition of AOP on Wikipedia seems a little difficult for beginners to understand, so in this blog I use an example to show why we need it.

Suppose I have an order command class which performs its core business logic in method doBusiness:


package aop;

import java.util.logging.Level;
import com.sun.istack.internal.logging.Logger;

public class OrderCommand {
    public void execute() {
        Logger logger = Logger.getLogger(OrderCommand.class);
        logger.log(Level.INFO, "start processing");
        // authorization check
        logger.log(Level.INFO, "authorization check");
        logger.log(Level.INFO, "begin performance trace");
        // only this line implements real business logic
        this.doBusiness();
        logger.log(Level.INFO, "end performance trace");
    }

    private void doBusiness() {
        System.out.println("Do business here");
    }

    public static void main(String[] args) {
        new OrderCommand().execute();
    }
}


In method execute(), the code is flooded with non-functional concerns like logging, authorization check, and performance trace.

clipboard1.png

This is not a good design; we can try to improve it via the template method pattern.

 

Template method pattern

 

With this pattern, I create a new parent class BaseCommand and put all the non-functional code inside its execute method.

 

import java.util.logging.Level;
import com.sun.istack.internal.logging.Logger;

public abstract class BaseCommand {
    public void execute() {
        Logger logger = Logger.getLogger(this.getClass());
        logger.log(Level.INFO, "start processing");
        // authorization check
        logger.log(Level.INFO, "authorization check");
        logger.log(Level.INFO, "begin performance trace");
        // only this line implements real business logic
        this.doBusiness();
        logger.log(Level.INFO, "end performance trace");
    }

    protected abstract void doBusiness();
}

Now the real business logic is defined in child class OrderCommand, whose implementation is very clean:

 

public class OrderCommand extends BaseCommand {
    public static void main(String[] args) {
        new OrderCommand().execute();
    }

    @Override
    protected void doBusiness() {
        System.out.println("Do business here");
    }
}

Drawback of this solution: since the parent class defines the template method execute(), a child class cannot adapt it — for example, a child class cannot change the order of the authorization check and the performance trace. And suppose a child class does not want to implement authorization check at all: that could not be achieved with this solution. We have to use the decorator pattern instead.

 

Decorator pattern

 

First I need to create an interface:

 

public interface Command {
public void execute();
}

And create a decorator to cover the logging and authorization check function:

 

import java.util.logging.Level;
import com.sun.istack.internal.logging.Logger;

public class LoggerDecorator implements Command {
    private Command cmd;

    public LoggerDecorator(Command cmd) {
        this.cmd = cmd;
    }

    @Override
    public void execute() {
        Logger logger = Logger.getLogger(this.getClass());
        logger.log(Level.INFO, "start processing");
        // authorization check
        logger.log(Level.INFO, "authorization check");
        this.cmd.execute();
    }
}

And a second decorator for performance trace:

 

package aop;

import java.util.logging.Level;
import com.sun.istack.internal.logging.Logger;

public class PerformanceTraceDecorator implements Command {
    private Command cmd;

    public PerformanceTraceDecorator(Command cmd) {
        this.cmd = cmd;
    }

    @Override
    public void execute() {
        Logger logger = Logger.getLogger(this.getClass());
        logger.log(Level.INFO, "begin performance trace");
        this.cmd.execute();
        logger.log(Level.INFO, "end performance trace");
    }
}

And here is the class with the real business logic. Now I have full flexibility to construct the instance according to the real business case, with the help of different decorators. The following instance fullCmd gets both the authorization check logging and the performance trace.

 

public class OrderCommand implements Command {
    public static void main(String[] args) {
        Command fullCmd = new LoggerDecorator(new PerformanceTraceDecorator(new OrderCommand()));
        fullCmd.execute();
    }

    @Override
    public void execute() {
        System.out.println("Do business here");
    }
}

Suppose in a given scenario only performance trace is needed; then we can use just the performance trace decorator:

 

Command cmd = new PerformanceTraceDecorator( new OrderCommand());
cmd.execute();

Drawback of the decorator pattern: the decorator classes and the business class have to implement the same interface, Command, which is business related. Is it possible for the utility classes implementing the non-functional concerns to work without implementing the same interface as the business class?

AOP solution

I use a Java project built on the Spring framework to demonstrate the idea. The whole source code of this project can be found in the git repository.

 

Suppose I hope to add performance trace on this business method: save.

clipboard2.png

1. You may have already noticed the annotation @Log(nameI042416="annotation for save method") used in line 10.

 

This annotation is declared in file Log.java:

clipboard1.png


2. Now I have to declare an Aspect class which contains a pointcut. A pointcut tells the Spring framework which methods AOP advice can be applied to, and the class annotated with @Aspect contains the methods Spring calls to "decorate" the methods identified by the annotation. Below I declare a pointcut "logJerry" via the annotation @Pointcut:

clipboard2.png

For example, since we have annotated the business method save() with "@Log(nameI042416="annotation for save method")",

we can define what logic must be executed around it, with the help of @Before and @After plus the declared pointcut.

clipboard3.png

With this approach, I can add the performance trace function to the save method without modifying its source code.

Set breakpoints on the beforeExec and afterExec methods, launch the project with Tomcat in debug mode, and paste the following URL into your browser:

 

http://localhost:9498/springaop/aopRootJerry/aop2Jerry/i042416?string=sap

Through the callstack you can understand how the AOP call works in Spring.

clipboard4.png

clipboard5.png

Why do we say AOP increases modularity by allowing the separation of cross-cutting concerns?

Suppose we have lots of methods, all of which need common utilities like logging, performance trace, and authorization check. Before AOP, these utilities are scattered in every method:

clipboard6.png


After AOP is used, the common code is extracted into the Aspect class and reusability is achieved. From the picture below we can see the cross-cutting concerns are now separated.

clipboard7.png

A real example of using volatile keyword in Java


Consider the following example:

 

package thread;

public class ThreadVerify {
    public static boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread testThread = new Thread() {
            @Override
            public void run() {
                int i = 1;
                while (!stop) {
                    // System.out.println("in thread: " + Thread.currentThread() + " i: " + i);
                    i++;
                }
                System.out.println("Thread stop i=" + i);
            }
        };
        testThread.start();
        Thread.sleep(1000);
        stop = true;
        System.out.println("now, in main thread stop is: " + stop);
        testThread.join();
    }
}

The working thread is started to increment i, and after one second the flag is set to true in the main thread. We would expect the working thread to print "Thread stop i=...". Unfortunately, that is NOT the case.

 

Through Process Explorer we can see the working thread is still running:

clipboard1.png

The only way we can terminate it is to click this button in Eclipse:


clipboard2.png

The reason is: every thread in Java has its own thread-local stack at runtime. Each time a thread accesses a variable, it first locates the variable's content in main memory, then loads that content from main memory into its local stack. Once this load is done, the relationship between the thread-local stack and main memory is cut.

 

Later, when the thread modifies the variable, the change is made directly on its thread-local stack. At some time chosen by the JVM (the developer has no control over this timeslot), the change is flushed from the thread-local stack back to main memory. Back to our example: the main thread changes the flag to true one second later — but this is too late. When the working thread reads the flag from its own local stack, the flag is still false (which makes sense, since it was copied from main memory before the change), so the working thread never ends. See the following picture for detail.

clipboard3.png

The solution: add the keyword volatile to the flag variable, forcing every read of it in the working thread to go to main memory. Then, once the flag is changed to true in the main thread, the working thread detects the change, since it now reads the data from main memory.

 

After this keyword is added we can get expected output:

clipboard4.png

How to find the exact location where bean configuration file is parsed in Spring framework


We can define bean configuration in XML and then get an instantiated bean instance with the help of containers such as ClassPathXmlApplicationContext, as displayed below:

clipboard1.png

The content of Beans.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!-- http://stackoverflow.com/questions/18802982/no-declaration-can-be-found-for-element-contextannotation-config -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">

    <bean id="helloWorld" class="main.java.com.sap.HelloWorld">
        <property name="message" value="sss"/>
        <property name="testMin" value="2"/>
        <property name="phone" value="1"/>
    </bean>
</beans>

 

Where can we set a breakpoint to start? No hint. Here is a tip: make Beans.xml invalid by deliberately changing the tag bean to beana, and relaunch the application. Now an exception is raised as expected. Click the hyperlink XmlBeanDefinitionReader.java:399;

clipboard2.png

line 399, where the exception is raised, will be located automatically. The core logic that loads the XML file is just above the exception position, at line 391, so we can set a breakpoint on line 391 now:

clipboard3.png

Change the tag from beana back to bean, and start the application in debug mode. The code below is the core logic of bean configuration file parsing in the Spring framework. The logic consists of two main steps:

1. parse the XML into a DOM structure in memory ( line 391 )

2. extract the bean information contained in the DOM structure and generate BeanDefinition structures ( line 392 )

clipboard4.png


From the screenshot below we can see the XML is parsed via a SAX parser:

clipboard5.png

My "helloWorld" bean is parsed here:

clipboard6.png

clipboard7.png

How does component-scan work in Spring Framework


In the Spring configuration XML file, we can define a package for the tag component-scan, which tells the Spring framework to search all classes within the specified package and look for classes annotated with @Named or @Component.

clipboard1.png

I am very curious about how Spring framework achieves this scan, so I have made some debugging to figure it out.

 

In the blog How to find the exact location where bean configuration file is parsed in Spring framework, I already found where the XML configuration file is parsed, so I can directly set a breakpoint in that source code.

Here the package to be scanned is parsed from xml file:

clipboard2.png

And the actual scan is performed in line 87:

clipboard3.png

Here all classes within the specified package and its child packages are extracted as resources. I get 7 candidate resources for the scan, which makes sense since I have 7 classes in the package in total:

clipboard4.png

clipboard5.png

The check whether a class carries a qualifying annotation is done in this method:

clipboard6.png

If the scanned class has at least one annotation ( the annotations written on the class are stored in metadataReader ) that resides in this.includeFilters, it is considered a candidate.

clipboard7.png

By inspecting the content of this.includeFilters, we can see that the Spring framework considers @Component and @Named as qualifying annotations for the automatic component-scan logic.

clipboard8.png

clipboard9.png

Back to my example: since my bean class is annotated with @Named,

clipboard10.png

at runtime this annotation written in the class source code is extracted via reflection and checked against the Spring framework's pre-defined annotation set. Here is how my bean class is evaluated as a candidate, since it has the @Named annotation.

clipboard11.png

clipboard12.png
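The candidate check can be sketched in plain Java with a stand-in annotation; this is only an illustration of the reflection-based matching, not Spring's actual filter code, and Component here is a local stand-in for Spring's @Component / @Named:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class ScanSketch {

    // Local stand-in for Spring's @Component / @Named include filters
    @Retention(RetentionPolicy.RUNTIME)
    @interface Component {}

    @Component
    static class HelloWorld {}   // would be picked up by the scan

    static class PlainClass {}   // no annotation, would be skipped

    // Mimics the filter check: a class is a candidate when it carries
    // at least one annotation from the include-filter set.
    static boolean isCandidate(Class<?> clazz) {
        return clazz.isAnnotationPresent(Component.class);
    }

    public static void main(String[] args) {
        System.out.println(isCandidate(HelloWorld.class)); // true
        System.out.println(isCandidate(PlainClass.class)); // false
    }
}
```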


How does @Autowired work in Spring framework


Suppose I have a bean named HelloWorld which has a member attribute pointing to another bean, User.

 

clipboard1.png

With the annotation @Autowired, as soon as getBean is called at runtime, the returned HelloWorld instance will automatically have its user attribute injected with a User instance.

clipboard2.png

How is this behavior implemented by Spring framework?

 

1. In the Spring container implementation's refresh method, all singleton beans are initialized by default.

clipboard3.png


When the HelloWorld bean is initialized:

clipboard4.png

Since it has the following source code:

@Autowired 
private User user;


At runtime, this annotation is available in the metadata via reflection. In the metadata structure below, targetClass points to the HelloWorld bean, and injectedElements points to the User class to be injected.

clipboard5.png

clipboard6.png

2. In doResolveDependency, the definition of the User bean is looked up based on this.beanDefinitionNames ( a list in DefaultListableBeanFactory ):

clipboard7.png

Once found, the result is added to the array candidateNames:

clipboard8.png

clipboard9.png

Then the constructor of the User bean class is called ( still triggered by the getBean call ), and the user instance is created:

clipboard10.png

clipboard11.png

The created user instance, together with its name "user", is inserted into the map matchingBeans.

clipboard12.png

clipboard13.png

3. Finally the user reference is set on the user attribute of the HelloWorld instance via reflection. Here the variable bean in line 569 points to the HelloWorld instance, and value points to the user instance.

clipboard14.png

Once field.set(bean, value) is done, we can observe in the debugger that the user attribute of the HelloWorld instance has been injected successfully.

clipboard15.png
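This final reflection step can be sketched without Spring; the annotation and classes below are local stand-ins for the example, and the helper mimics the field.set(bean, value) call rather than reproducing Spring's actual post-processor:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class AutowireSketch {

    // Local stand-in for Spring's @Autowired
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Autowired {}

    static class User {}

    static class HelloWorld {
        @Autowired
        private User user;

        User getUser() { return user; }
    }

    // Mimics the injection: find fields carrying the annotation whose
    // type matches the candidate, then set them via reflection
    // (the field.set(bean, value) call seen at line 569).
    static void inject(Object bean, Object value) throws Exception {
        for (Field f : bean.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(Autowired.class)
                    && f.getType().isInstance(value)) {
                f.setAccessible(true);
                f.set(bean, value);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        HelloWorld hw = new HelloWorld();
        inject(hw, new User());
        System.out.println(hw.getUser() != null); // true
    }
}
```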



HANA Hibernate Exception.


I am trying to connect to HANA from Java via Hibernate, using the reverse engineering method.

First, there is no entry for HANA among the existing drivers, so I use a generic JDBC driver and set the properties for the HANA database, but an exception is thrown:

com.sap.db.jdbc.exceptions.jdbc40.SQLInvalidAuthorizationSpecException: [10]: authentication failed

    at com.sap.db.jdbc.exceptions.jdbc40.SQLInvalidAuthorizationSpecException.createException(SQLInvalidAuthorizationSpecException.java:40)

    at com.sap.db.jdbc.exceptions.SQLExceptionSapDB.createException(SQLExceptionSapDB.java:301)

    at com.sap.db.jdbc.exceptions.SQLExceptionSapDB.generateDatabaseException(SQLExceptionSapDB.java:185)

    at com.sap.db.jdbc.packet.ReplyPacket.buildExceptionChain(ReplyPacket.java:102)

    at com.sap.db.jdbc.ConnectionSapDB.execute(ConnectionSapDB.java:1030)

    at com.sap.db.jdbc.ConnectionSapDB.execute(ConnectionSapDB.java:820)

    at com.sap.db.util.security.AbstractAuthenticationManager.connect(AbstractAuthenticationManager.java:43)

    at com.sap.db.jdbc.ConnectionSapDB.openSession(ConnectionSapDB.java:569)

    at com.sap.db.jdbc.ConnectionSapDB.doConnect(ConnectionSapDB.java:422)

    at com.sap.db.jdbc.ConnectionSapDB.<init>(ConnectionSapDB.java:174)

    at com.sap.db.jdbc.ConnectionSapDBFinalize.<init>(ConnectionSapDBFinalize.java:13)

    at com.sap.db.jdbc.Driver.connect(Driver.java:307)

    at org.eclipse.datatools.connectivity.drivers.jdbc.JDBCConnection.createConnection(JDBCConnection.java:328)

    at org.eclipse.datatools.connectivity.DriverConnectionBase.internalCreateConnection(DriverConnectionBase.java:105)

    at org.eclipse.datatools.connectivity.DriverConnectionBase.open(DriverConnectionBase.java:54)

    at org.eclipse.datatools.connectivity.drivers.jdbc.JDBCConnection.open(JDBCConnection.java:96)

    at org.eclipse.datatools.connectivity.drivers.jdbc.JDBCConnectionFactory.createConnection(JDBCConnectionFactory.java:53)

    at org.eclipse.datatools.connectivity.internal.ConnectionFactoryProvider.createConnection(ConnectionFactoryProvider.java:83)

    at org.eclipse.datatools.connectivity.internal.ConnectionProfile.createConnection(ConnectionProfile.java:359)

    at org.eclipse.datatools.connectivity.ui.PingJob.createTestConnection(PingJob.java:76)

    at org.eclipse.datatools.connectivity.ui.PingJob.run(PingJob.java:59)

    at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

 

 

Any help is appreciated.
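For reference, the "[10]: authentication failed" error from the HANA JDBC driver usually indicates wrong or locked credentials rather than a driver problem. A minimal hibernate.properties sketch for a HANA connection through the generic JDBC driver might look like the following, where host, port, user, and password are placeholders (the HANA SQL port typically follows the pattern 3<instance>15) and the dialect assumes a Hibernate version that ships HANA dialects:

```properties
# Hypothetical settings -- replace placeholders with real values
hibernate.connection.driver_class=com.sap.db.jdbc.Driver
hibernate.connection.url=jdbc:sap://hanahost:30015
hibernate.connection.username=MYUSER
hibernate.connection.password=secret
hibernate.dialect=org.hibernate.dialect.HANAColumnStoreDialect
```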
