Java Memory Management

Java Memory Model

What’s in the memory of a running Java application?

Before we look into the details, we can reason about which components are needed for an application to run:

  1. The code: Java code is not compiled into native machine code; it is compiled into class files and executed by the execution engine. So the code must be loaded into memory to execute.
  2. JVM: the Java Virtual Machine itself.
  3. The stack of the current thread: it stores the current state of the execution, including the code being run, the local variables, and the call trace.
  4. Data objects: when Java code runs, objects need to be instantiated to hold data. They should not live in the stack area, so another area must be reserved for them.

With the above reasoning we are already pretty close to the actual Java memory model, which consists of the following areas:

  • Method area

This area stores class-level information, such as the class name and the name of its immediate parent class. There is one method area per JVM, shared by all threads.

  • Heap area

This area holds all instantiated objects. There is also one heap area per JVM. For example, string constants live on the heap so that they can be referenced and shared.

  • Stack area

There is one runtime stack per thread; a new stack frame is created each time a method is invoked.

  • PC Register

This register stores the address of the instruction the thread is currently executing. There is also one PC register per thread.
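To make the areas above more concrete, here is a minimal sketch (class and variable names are illustrative, not from the original text) showing where different pieces of a trivial program live:

    public class MemoryAreasDemo {

        // Static fields and class metadata are tracked in the method area (Metaspace since Java 8).
        static String greeting = "hello";

        public static void main(String[] args) {
            // 'count' is a primitive local variable; it lives in this thread's stack frame.
            int count = 42;

            // The StringBuilder object lives on the heap; 'sb' is just a reference on the stack.
            StringBuilder sb = new StringBuilder(greeting);
            sb.append(", count=").append(count);

            // The PC register tracks which bytecode instruction this thread is executing.
            System.out.println(sb);
        }
    }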

Java GC

How does Java GC work?

A Java program runs in the Java Virtual Machine, and the JVM manages memory allocation and reclamation for the program. In C/C++ programming, the program needs to call malloc() and free() to allocate and release memory. The JVM handles allocation and reclamation automatically; it introduces the garbage collector to reclaim memory that is no longer used.

The most critical question is: how does Java know whether a memory area is ready to be reclaimed? The GC scans from the roots, including the stack area; if an object in the heap is reachable from the stack, it is still being referenced and must not be reclaimed. Otherwise, its memory can be reclaimed.
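As a small sketch of this idea (the class name is illustrative), an object becomes eligible for collection once the last reference to it is dropped:

    public class ReachabilityDemo {
        public static void main(String[] args) {
            // 'data' on the stack points to a 10 MiB array on the heap: the array is reachable.
            byte[] data = new byte[10 * 1024 * 1024];

            // Drop the only reference: the array is now unreachable and eligible for GC.
            data = null;

            // System.gc() is only a hint; the JVM decides when to actually collect.
            System.gc();
        }
    }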

What are some of the algorithms that Java GC uses?

Mark and Sweep

In phase I of this algorithm, the Java GC traverses the heap to mark which objects are still reachable.

In phase II, the Java GC marks the rest of the area as recyclable, and these areas will be overwritten the next time an allocation request arrives.

Does Java GC compact the memory to speed up later allocations? Yes, it does. The mark-compact algorithm moves the live objects toward the start of the heap, so that the free space becomes one continuous chunk and new allocations are faster.

The problem with the mark-and-sweep algorithm is that it can become quite inefficient. If the object graph grows large or the program becomes complex, traversing the graph and cleaning it up can take a long time.

Generational Garbage Collection

Generational garbage collection is built on top of the mark-and-sweep algorithm. Most Java objects are short-lived, and the objects that do survive their first collections are likely to live for a long time.

Generational GC divides the heap into a few areas: the young generation, the old generation, and the permanent generation.

Objects are first allocated in the young generation, which contains the Eden space and two survivor spaces (S0 and S1). When it fills up, a minor GC is triggered to reclaim the dead objects, and objects that survive are gradually promoted to the old generation.

The old generation stores long-lived objects: only when an object in the young generation reaches a certain age is it promoted to the old generation. A major GC is run to collect the old generation.

The permanent generation is used to store the metadata the JVM needs to run, for example loaded class metadata.

From Java 8 onward, the permanent generation is removed and replaced with Metaspace, and the method area is part of this space.

Java Memory Tuning

Java GC types

Java provides different types of GC:

  • The Serial GC

This is the simplest collector: it runs in a single thread and performs the different GC tasks serially.

  • The parallel GC

This GC runs in multiple threads and performs the collection work in parallel.

  • The Concurrent Mark Sweep Collector

This GC collects the tenured (old) generation, doing most of its work concurrently with the application threads to keep pauses short.

How do we tune Java memory?

  • -Xmx: set the maximum heap size
  • -Xms: set the initial heap size
  • -XX:MaxPermSize: set the maximum permanent generation size (Java 7 and below)
  • -XX:MetaspaceSize: set the initial Metaspace size (Java 8 and above)
  • -verbose:gc: print a line to the console whenever a garbage collection runs
  • -Xmn: set the size of the young generation
  • -XX:+HeapDumpOnOutOfMemoryError: create a heap dump file when an OutOfMemoryError is thrown
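The heap-related flags above can be verified at runtime. Here is a minimal sketch (the class name and the flag values in the comment are illustrative) that reads the effective limits through the Runtime API:

    public class HeapSettings {
        public static void main(String[] args) {
            // Example launch (values are illustrative):
            //   java -Xms256m -Xmx1g -verbose:gc HeapSettings
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap  : " + rt.maxMemory() / (1024 * 1024) + " MB");   // roughly -Xmx
            System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB"); // currently committed
            System.out.println("free heap : " + rt.freeMemory() / (1024 * 1024) + " MB");
        }
    }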

Java Stream

What is a stream?

A stream is an abstraction of data operations. It takes input from collections, arrays, or I/O channels.

From imperative to declarative

For example, given a list of people, find the first 10 people aged 18 or younger.

The following solution is the imperative approach:

public void imperativeApproach() throws IOException {
        List<Person> people = MockData.getPeople();

        // Collect the first 10 people aged 18 or younger.
        List<Person> first10Under18 = new ArrayList<>();
        for (Person person : people) {
            if (person.getAge() <= 18) {
                first10Under18.add(person);
                if (first10Under18.size() == 10) {
                    break;
                }
            }
        }

        for (Person person : first10Under18) {
            System.out.print(person);
        }
}

The following is the declarative approach:

public void declarativeApproach() throws IOException {
        List<Person> people = MockData.getPeople();

        people.stream()
              // a lambda expression used as the filter predicate
              .filter(person -> person.getAge() <= 18)
              .limit(10)
              .collect(Collectors.toList())
              .forEach(System.out::print);
}

Abstraction

We mentioned that a stream is an abstraction over data manipulation. It abstracts the pipeline in the following way:

  • Concrete: the source can be Sets, Lists, Maps, etc.
  • Stream: operations such as filter, map, etc.
  • Concrete: collect the data to make it concrete again.

Intermediate and Terminal Operations

Java streams have two kinds of operations:

  • Intermediate operations: map, filter, sorted
  • Terminal operations: collect, forEach, reduce

Each intermediate operation is lazily evaluated and returns a new stream; nothing runs until a terminal operation is invoked.
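A minimal sketch of this laziness (names are illustrative): the peek calls below print nothing until the terminal collect pulls elements through the pipeline.

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class LazinessDemo {
        public static void main(String[] args) {
            Stream<String> pipeline = Arrays.asList("a", "b", "c").stream()
                // peek is an intermediate operation: nothing is printed yet.
                .peek(s -> System.out.println("visiting " + s))
                .map(String::toUpperCase);

            System.out.println("nothing has been visited yet");

            // Only the terminal operation pulls elements through the pipeline.
            List<String> result = pipeline.collect(Collectors.toList());
            System.out.println(result); // [A, B, C]
        }
    }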

Range

With IntStream.range(), you can create a stream over a fixed range of integers, for example:

    public void rangeIteratingLists() throws Exception {
        List<Person> people = MockData.getPeople();

        // Use an IntStream over indices 0..9 to print the first 10 people.
        IntStream.range(0, 10).forEach(i -> System.out.println(people.get(i)));

        // Alternatively, limit the stream to the first 10 elements.
        people.stream().limit(10).forEach(System.out::println);
    }

You can also apply a function iteratively a given number of times:

    public void intStreamIterate() throws Exception {
        // This is very much like the fold function in Kotlin:
        // it keeps iterating based on the function you provide.
        IntStream.iterate(0, operand -> operand + 1).limit(10).forEach(System.out::println);
    }

Max, Min and Comparators

Java streams provide built-in min/max functions that accept customized comparators. For example:

    public void min() throws Exception {
        final List<Integer> numbers = ImmutableList.of(1, 2, 3, 100, 23, 93, 99);

        int min = numbers.stream().min(Comparator.naturalOrder()).get();

        System.out.println(min);
    }

Distinct

Sometimes we would like to get the distinct elements from a stream; we can use the stream's distinct API:

  public void distinct() throws Exception {
    final List<Integer> numbers = ImmutableList.of(1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 9, 9, 9);

    List<Integer> distinctNumbers = numbers.stream()
        .distinct()
        .collect(Collectors.toList());

    // [1, 2, 3, 4, 5, 6, 7, 8, 9]
    System.out.println(distinctNumbers);
  }

Filtering and Transformation

The stream filter API lets you keep only the elements that match a predicate, for example:

    public void understandingFilter() throws Exception {
        ImmutableList<Car> cars = MockData.getCars();

        // A Predicate is a function that returns true or false for each element.
        final Predicate<Car> carPredicate = car -> car.getPrice() < 20000;

        List<Car> carsFiltered = cars.stream()
            .filter(carPredicate)
            .collect(Collectors.toList());

        System.out.println(carsFiltered.size());
    }

The map API lets you transform each element; for example, we could define another object type and transform the given stream into a stream of that type:

    public void ourFirstMapping() throws Exception {
        // transform from one data type to another
        List<Person> people = MockData.getPeople();

        List<PersonDTO> dtos = people.stream()
            .map(p -> new PersonDTO(p.getId(), p.getFirstName(), p.getAge()))
            .collect(Collectors.toList());

        dtos.forEach(System.out::println);
    }

Group Data

A common operation in SQL queries is data grouping, for example:

SELECT COUNT(*), TYPE FROM JOB WHERE USER_ID = 123 GROUP BY TYPE

Java streams provide similar functionality:

  public void groupingAndCounting() throws Exception {
    ArrayList<String> names = Lists
        .newArrayList(
            "John",
            "John",
            "Mariam",
            "Alex",
            "Mohammado",
            "Mohammado",
            "Vincent",
            "Alex",
            "Alex"
        );

    Map<String, Long> counting = names.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));

    counting.forEach((name, count) -> System.out.println(name + " > " + count));
  }

Reduce and Flatmap

This is very similar to a Hadoop MapReduce job, where map takes care of transforming the data, while reduce collects the data and performs the final computation. For example:

  public void reduce() throws Exception {
    Integer[] integers = {1, 2, 3, 4, 99, 100, 121, 1302, 199};

    // Compute the sum of the elements, with 0 as the identity (initial) value.
    int sum = Arrays.stream(integers).reduce(0, (a, b) -> a + b);
    System.out.println(sum);

    // or use a method reference
    int sum2 = Arrays.stream(integers).reduce(0, Integer::sum);
    System.out.println(sum2);

  }

flatMap differs from map in that it first flattens the nested structure into a single stream.

For example:

List<List<String>> list = Arrays.asList(
  Arrays.asList("a"),
  Arrays.asList("b"));
System.out.println(list);

System.out.println(list
  .stream()
  .flatMap(Collection::stream)
  .collect(Collectors.toList()));

The result of the stream is a flattened list of strings: [a, b].

Lambda Expression in Java/Kotlin

Higher-order function

In computer science, a higher-order function is a function that does at least one of the following:

  • Takes one or more functions as arguments
  • Returns a function as its result.

All other functions are first-order functions.
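A minimal Java sketch of both cases (the class and method names are illustrative), using java.util.function.Function:

    import java.util.function.Function;

    public class HigherOrderDemo {

        // Takes a function as an argument and applies it twice: a higher-order function.
        static int applyTwice(Function<Integer, Integer> f, int x) {
            return f.apply(f.apply(x));
        }

        // Returns a function as its result: also higher-order.
        static Function<Integer, Integer> adder(int amount) {
            return x -> x + amount;
        }

        public static void main(String[] args) {
            System.out.println(applyTwice(x -> x * 2, 3)); // 12
            System.out.println(adder(5).apply(10));        // 15
        }
    }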

Anonymous class

In Java, an anonymous class lets you declare and instantiate a class at the same time. If you only need a local class once, an anonymous class is a good fit. For example, suppose you wish to define a Runnable to execute a task:

Executors.newSingleThreadExecutor().execute(new Runnable() {
    @Override
    public void run() {
        // Your task execution.
    }
});

As you can see in the above example, Runnable is an interface with a single method, run. The anonymous class implements that interface.

Lambda Expression

Besides anonymous classes, Java also supports anonymous functions, called lambda expressions.

While an anonymous class lets you declare a new class within a statement, it is not very concise when the class has only one method.

For the example in the section above, we can simplify the implementation with a lambda expression:

Executors.newSingleThreadExecutor().execute(() -> { /* Your task execution */ });

Lambda expressions provide a few capabilities:

  • They let you treat functionality as a method argument, or code as data.
  • A function can be created without belonging to any class.
  • A lambda expression can be passed around as if it were an object and executed on demand.

Meanwhile, functions are first-class citizens in Kotlin: they can be stored in variables and data structures, passed as arguments, and returned from other higher-order functions.

Kotlin lambda expressions follow this syntax:

  • A lambda is always surrounded by curly braces.
  • Parameter declarations in the full syntactic form go inside the curly braces and have optional type annotations.
  • The body goes after an -> sign.
  • If the inferred return type is not Unit, the last expression inside the body is treated as the return value.

As you can tell, a lambda expression cannot specify its return type explicitly. If you wish to declare the return type, you can use an alternative: an anonymous function.

fun(x: Int, y: Int): Int = x + y

A major difference between Kotlin and Java is that Kotlin treats functions as first-class values and has dedicated function types, for example:

val initFunction: (Int) -> Int

The above declaration means that initFunction has a function type: it takes an Int and returns an Int.

A function of this type can be written as a lambda:

val a = { i: Int -> i + 1 }

SRE: Data Integrity

 

Data integrity usually refers to the accuracy and consistency of data throughout its lifetime. For customer-facing online services, things get even more complex: any data corruption, data loss, or extended unavailability is considered a data integrity issue from the customer's point of view.

Data integrity can be a big problem in many cases. In one instance, a database table was corrupted and we had to spend a few hours restoring the data from a snapshot database. In another instance, data was accidentally deleted and it had a fatal impact on our client, as the client never expected the data to be unavailable. However, it was too expensive to restore the data, so we had to fix the dependent data records and some code on the client side to mitigate the impact. In yet another instance, the data loaded for the client was not what they expected. This was clearly a data consistency issue, but it was not reproducible, which made it very hard for the team to debug.

Several factors characterize the failures that can lead to data integrity issues:

  • Root cause

User actions, Operator Error, Application Bug, Infrastructure defect, Hardware Failure, Site Disaster

  • Scope

Wide; narrow and directed

  • Rate

Big Bang, Slow and Steady

This leads to 24 combinations of data integrity failure modes. How do we handle such issues?

The first layer of defense is to apply soft deletion to client data. The idea behind soft deletion is to make sure the data is recoverable if needed, for example after an operator error. Soft deletion is usually implemented by adding an is_deleted flag and a deleted_at timestamp to the table. When data is deleted, the rows are not removed from the database immediately; they are marked as deleted with a scheduled purge time in the future, say 60 days after the deletion. This way, the deletion can be reverted if necessary.
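As a rough sketch of the idea in application code (the CustomerRecord class and field names are hypothetical, chosen only to mirror the is_deleted and deleted_at columns described above):

    import java.time.Instant;
    import java.time.temporal.ChronoUnit;

    // Hypothetical record type used only to illustrate soft deletion.
    class CustomerRecord {
        boolean isDeleted;   // maps to an is_deleted column
        Instant deletedAt;   // maps to a deleted_at column

        // Soft delete: mark the row instead of removing it, and schedule the real purge.
        void softDelete() {
            this.isDeleted = true;
            this.deletedAt = Instant.now().plus(60, ChronoUnit.DAYS); // purge 60 days later
        }

        // Undelete is possible as long as the purge has not run yet.
        void restore() {
            this.isDeleted = false;
            this.deletedAt = null;
        }

        // Queries must filter out soft-deleted rows.
        boolean isVisible() {
            return !isDeleted;
        }
    }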

There are different opinions about soft deletion, as it can add complexity to data management. For example, when there are hierarchies and dependency relationships between data records, a soft deletion might break data constraints. It also makes data selection and manipulation more complex, since a filter has to be applied to exclude the rows that have been soft deleted. Recovering soft-deleted data can be complex as well: if only part of the data was deleted, a recovery might involve a complicated data merge.

The second layer is to build a data backup system and make the recovery process fast. Be careful here: the backup or archive itself is not the goal of data integrity. Finding ways to prevent data loss, to detect data corruption, and to recover quickly from a data integrity incident matters more. Data backup is often neglected because it yields no visible benefit and is not a high priority for anyone, but building a restore system is a much more useful goal.

For many cloud services, data backup is an option. For example, AWS RDS supports creating data snapshots, and the cloud Redis cache cluster supports backing up its data to EBS storage. Many people stop here, assuming that the data is safely backed up. However, data recovery can take a long time to finish, and data integrity is broken during that recovery window. The recovery time should therefore be an important metric for the system.

Besides backups, many systems use replicas, and by failing over to a replica when the primary node has an issue, they improve availability. But we need to realize that the data might not be fully consistent between the primary instance and the replica.

A third layer is to detect errors earlier. For example, run a data validation job that checks the integrity of the data across different storage systems, so that an issue can be fixed quickly when it happens.
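A minimal sketch of such a validation job (the RecordStore interface and its checksum method are hypothetical, used only to show the shape of the comparison):

    import java.util.Map;
    import java.util.Objects;

    // Hypothetical abstraction over a storage system that can report per-record checksums.
    interface RecordStore {
        Map<String, String> checksumsByRecordId();
    }

    class DataValidationJob {
        private final RecordStore primary;
        private final RecordStore backup;

        DataValidationJob(RecordStore primary, RecordStore backup) {
            this.primary = primary;
            this.backup = backup;
        }

        // Compare per-record checksums and report mismatches so corruption is caught early.
        void run() {
            Map<String, String> expected = primary.checksumsByRecordId();
            Map<String, String> actual = backup.checksumsByRecordId();

            expected.forEach((id, checksum) -> {
                if (!Objects.equals(checksum, actual.get(id))) {
                    System.out.println("Integrity mismatch for record " + id);
                }
            });
        }
    }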

SRE: Service Level Objectives

As we move from a monolithic service to microservices, I find it useful to think about the following problems:

  • How do we correctly measure the service?

This question can be broken down into the following sub-questions:

  • If you maintain a service, how do you tell whether it is functioning correctly or not?
  • If your service is a client-facing product, how do you know whether you are providing a good experience for the client?
  • How do you know whether your service has hit a performance bottleneck?
  • If you are about to scale the service, what metrics should you use to find the performance bottleneck?

As you can tell from the sub-questions above, it is all about defining the right metrics for the right client and scenario. A sanity check of the service should be quick and straightforward; for a product experience check, you should put metrics where the product impact can be measured; for performance monitoring and optimization, measuring resource utilization and the dependent services is essential.

I have found that many times we did not measure our service correctly, and it takes time, experience, and domain knowledge to define such metrics well.

  • How do we manage the expectations of the clients that use our service?

One common problem with a monolithic service is that integration is often based on direct DB or data integration, where the client treats it as a local DB whose data is always available. With this setup, a failure in one domain becomes contagious: the client never assumes that the domain can fail, so when a failure happens, no exception handling logic is in place to deal with it.

To make the system more resilient, a service level agreement, or at least an explicit expectation, is truly needed. It is about setting the expectations of your service's clients: we are not a service that is always performant and reliable; we may slow down or become unavailable in some cases, and you should be prepared for that.

So I find it useful to think about these problems with SLOs and the related concepts:

  • SLIs: Service Level Indicators

Service level indicators are the metrics you define to measure the behavior and performance of your service. They can be product-facing or purely engineering-facing.

  • SLOs: Service Level Objectives

Service level objectives are targets set on top of the SLIs. They serve as the direction for the team when optimizing the system.

  • SLAs: Service Level Agreement

Service level agreements are more like the contract you define with your clients: how fast your service loads data on average, in what cases your service might fail, and how your clients should handle such failures. A small sketch below shows how an SLI can be checked against an SLO.
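The sketch uses illustrative numbers (the class name, counts, and the 99.9% target are examples, not from the text); in practice the request counts would come from your metrics system.

    public class AvailabilitySlo {
        public static void main(String[] args) {
            // Illustrative numbers: these counts would normally come from your metrics system.
            long totalRequests = 1_000_000;
            long successfulRequests = 999_412;

            // SLI: the measured proportion of successful requests.
            double availabilitySli = (double) successfulRequests / totalRequests;

            // SLO: the target we set for that indicator, e.g. 99.9% over the window.
            double availabilitySlo = 0.999;

            System.out.printf("availability SLI = %.4f%n", availabilitySli);
            System.out.println("SLO met: " + (availabilitySli >= availabilitySlo));
        }
    }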

Besides defining the SLIs and SLAs, it also helps to validate that clients have actually adopted the SLAs. For example, if your clients are supposed to handle data access failures from your service, you can validate that by scheduling a planned outage of your service. By doing so, you push your clients to adapt to the SLAs.

Book Review: Cloud Native Java

Cloud Native Java provides an overview of why people want to build cloud-native applications, and how to build them with Spring Boot and other frameworks.

A cloud-native application focuses on the application logic that solves business and product problems, without worrying about the setup and operation of the underlying infrastructure. Spring Boot is such a framework: it provides a simple service bootstrapping process and covers most of the complicated configuration needed to bring up a service. It does not solve every engineering problem you will face, but it provides easy integrations with the best frameworks in each problem domain.

Spring Boot lets you quickly build a REST application; you can have such an application up and running within half an hour. Its ease of use reminds me of good frameworks in the Python web world such as Flask. You can easily equip your application with different persistence solutions: SQL databases like MySQL and PostgreSQL, NoSQL databases like MongoDB, and cache services like Redis. You can also build a data-driven application with the Spring messaging framework or by integrating with message brokers like Kafka and RabbitMQ.

Spring makes such integration easy by providing native abstractions for these systems, such as Spring Data JPA for database access, the Spring cache abstraction for cache access, and Spring message channels for streaming frameworks. These abstractions let you focus on the application logic without worrying about the underlying details.

Spring also leverages well-built frameworks to solve problems beyond the application logic. By integrating with OAuth, it provides a robust authentication and authorization solution; by integrating with Eureka, it helps you solve middle-tier routing and service discovery problems.

It is fair to say that this book is very comprehensive. It covers almost every aspect of the system that you need to build for a cloud application. The well-organized examples and annotations make this book a handbook for people who need to solve exactly the same problems.

It is a good guide for people who are trying to understand how to build a Java application with Spring Boot on the cloud. It discusses some of the essential problems you need to consider and solve when building cloud-native applications, and it demonstrates that these problems can be solved easily with Spring Boot.

However, I don't consider it a good book for people who are exploring Spring Boot as an alternative solution to engineering problems within a relatively established organization.

The major issue is that the infrastructure components in such an organization have mostly been built already. The challenge for the engineers is to integrate Spring Boot into the existing system. For example, OAuth is a good choice for authentication, but what if your company already has its own authentication system, for example Django's authentication framework where each client is treated as a user?

I understand that it is very difficult to write such a book that fits everyone's use case, as every company is built differently and a solution that works for one company might not work for another. And the most interesting challenge for engineers is to solve the problem within a given context.

The other problem with this book is that it lacks in-depth discussion of the essential problems engineers face when building cloud-native applications; the discussion is relatively superficial. For books that don't provide the exact solution, I at least expect them to provide inspiration.

Overall, I recommend this book to people who are new to the industry and new to the Java web service world. For people who are looking for more in-depth solutions or discussions, this book is not the right choice.

Building Microservices Note: Integration

What is Microservices Integration?

How can one microservice talk to another? In general, the following technologies can be used: shared database, RPC, and REST. But how do we pick the right one?

The following are some general guidelines:

  • Avoid breaking changes.

When we make a change on the server side, the client side should not be impacted unless absolutely necessary.

  • Keep your APIs technology agnostic

Avoid integration technology that dictates which technology stacks we can use to implement the microservices.

  • Make your service simple for consumers

Make it easy for consumers to use the service.

  • Hide internal implementation details

Don't let consumers become bound to the internal implementation.

With these guidelines in mind, let's take a look at the available options.

Integration technology

Shared Database

The most common form of integration is database (DB) integration: if one service wants to retrieve or update information owned by another service, it reaches directly into that service's database.

This integration pattern breaks many of the guidelines we set out:

  • It allows external parties to view and bind to internal implementation details.
  • The consumers are tied to a specific technology choice. For instance, if we decide to use Postgres, every consumer has to find a way to integrate with Postgres.
  • It makes the logic very difficult to change, because the logic for parsing the data is spread across all consumers.

So, in sum, a shared database is never a good choice for microservice integration.

Remote Procedure Call

Remote Procedure Call (RPC) refers to the technique of making a local call and having it executed on a remote service somewhere.

Commonly used RPC techniques include SOAP, Thrift, and Protocol Buffers. They support separate interface definitions, and client/server stub generation based on those definitions.

Some of these techniques are bound to a specific networking protocol, while others let you use different types of protocols. For example, gRPC uses Protocol Buffers over HTTP/2.

Some RPC mechanisms are heavily tied to a specific platform, but others, such as Thrift and Protocol Buffers, support multiple languages.

So, what's the major downside of RPC? The problem is that local calls are not like remote calls, and consumers often forget one thing: the network is not reliable. The client needs to keep in mind that the remote server could be down, be slow, or lose the request, and must handle these situations properly.
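A minimal sketch of that mindset, assuming Java 11+'s built-in HttpClient and a placeholder URL: the remote call gets an explicit timeout and the failure path is handled rather than ignored.

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class RemoteCallDemo {
        public static void main(String[] args) {
            // The URL is a placeholder; the point is the explicit timeout and failure handling.
            HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();

            HttpRequest request = HttpRequest.newBuilder(URI.create("http://example.com/api/users/123"))
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();

            try {
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println("status: " + response.statusCode());
            } catch (IOException e) {
                // The remote side may be down, slow, or unreachable; fall back or retry here.
                System.out.println("remote call failed: " + e.getMessage());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }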

REST

Representational State Transfer (REST) is another technique. The most important concept in REST is the resource. The REST service creates different representations of a resource on request.

REST and HTTP

REST as a design pattern does not prescribe an underlying protocol, although it is most commonly used over HTTP. The major reason is that HTTP itself defines capabilities that work very well with the REST style, for example the HTTP verbs (GET, POST, and PUT) and the HTTP error codes.

There are, of course, downsides to REST over HTTP. For service-to-service communication where extremely low latency or small message size is important, HTTP might not be a good choice, as the message headers can be big; raw TCP or UDP might be a better choice in those cases.

Data Interchange

REST supports many data interchange formats; the most commonly used ones are JSON and XML. Application data is converted into JSON/XML format and then transmitted to the other service.

JSON vs. XML

Both JSON and XML are human readable and machine readable.

  • XML supports storing metadata in tags and attributes.

For example, the type of a field can be recorded as an attribute, while JSON has to encode such metadata as part of an object and associate it with the data.

  • XML supports links while JSON does not.

XLink is used to create hyperlinks in XML documents; with XLink, the link target can be defined outside the linked files. JSON is purely data based and does not support anything similar: shared objects have to be rendered concretely in multiple places even though they have the same content.

  • JSON is significantly less verbose than XML.
  • JSON supports arrays.

This makes it easy to map JSON onto the data structures in programming languages, such as arrays, lists, and hash maps. For example, JSON objects are directly interchangeable with Python dicts, and JSON can be easily parsed and manipulated in almost all programming languages.
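As a small sketch of that mapping in Java, assuming the Jackson library is on the classpath (the class name and sample JSON are illustrative):

    import com.fasterxml.jackson.core.type.TypeReference;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.util.List;
    import java.util.Map;

    public class JsonParsingDemo {
        public static void main(String[] args) throws Exception {
            // JSON objects and arrays map naturally onto Java maps and lists.
            String json = "{\"name\": \"John\", \"skills\": [\"java\", \"sql\"]}";

            Map<String, Object> parsed = new ObjectMapper()
                .readValue(json, new TypeReference<Map<String, Object>>() {});

            System.out.println(parsed.get("name"));             // John
            System.out.println((List<?>) parsed.get("skills")); // [java, sql]
        }
    }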