In The Name Of Allah The Beneficent The Merciful
Version 2 of the GlassFish Java EE Application Server contains many new features, among them enhanced clustering capabilities. The new clustering support improves high availability and scalability for deployment architectures through in-memory session state replication: clustered server instances replicate session state around a ring topology, storing the replicated information in memory.
This article describes the clustering capabilities of GlassFish version 2 and helps you get started deploying your application to a GlassFish cluster.
Sun Java System Application Server 9.1 is the Sun-supported distribution of the open-source GlassFish version 2 application server. This article uses the name GlassFish version 2 to refer to both.
Basic Concepts
Clusters in an application server enhance scalability and availability, which are related concepts.

In order to provide high availability of service, a software system must have the following capabilities:
- The system must be able to create and run multiple instances of
service-providing entities. In the case of application servers, the
service-providing entities are Java EE application server instances
configured to run in a cluster, and the service is a deployed Java EE
application.
- The system must be able to scale to larger deployments by adding
application server instances to clusters in order to accept increasing
service loads.
- If one application server instance in a cluster fails, its workload must be able to fail over to another server instance so that service is not interrupted. Although failure of a server instance or physical machine is likely to degrade overall quality of service, complete interruption of service is not acceptable in a high-availability environment.
- If a process makes changes to the state of a user's session, session state must be preserved across process restarts. The most straightforward mechanism is to maintain a reliable replica of session state so that, if a process aborts, session state can be recovered when the process is restarted. The principle is similar to that used in high-reliability RAID storage systems.
In order to support the goals of scalability and high availability, the GlassFish application server provides the following server-side entities:
- Server Instance – A server instance is the Java EE server
process (the GlassFish application server) that hosts your Java EE
applications. As required by the Java EE specification, each server
instance is configured for the various subsystems that it is expected to
run.
- Node Agent – A node agent is an agent process that runs on
every physical host where a server instance runs. The node agent manages
the life cycle of a server instance when directed by the Domain
Administration Server (DAS) described later in this article.
- Cluster – A cluster is a logical entity that determines the configuration of the server instances that make up the cluster. Usually, the configuration of a cluster implies that all the server instances within the cluster have homogeneous configuration. An administrator typically views the cluster as a single entity and uses the GlassFish Admin Console or a command-line interface (CLI) to manage the server instances in the cluster.
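Once a domain containing these entities exists, you can list them from the command line. The following is only a sketch, assuming a running DAS; the asadmin listing subcommands shown are the usual ones, but their options and output vary by release:

bin/asadmin list-node-agents
bin/asadmin list-clusters
bin/asadmin list-instances

Each command reports the corresponding entities known to the DAS, typically along with their current status.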
Domain Administration Architecture
Central to the GlassFish clustering architecture is the concept of an administrative domain. An administrative domain is a representation of access rights for an administrator or group of administrators. The following figure shows an overview of the domain administration architecture in the context of a single domain.

Figure 1. Domain Administration Architecture
An administrative domain is a dual-natured entity:
- Used by a developer, it provides a fully featured Java EE process in which to run your applications and services.
- Used in a real-world enterprise deployment, it provides a process that is dedicated to configuration and administration of other processes. In this case, an administrative domain takes the form of a Domain Administration Server (DAS) that you can use purely for administration purposes.
In general, high-availability installations require clusters, not independent server instances. The GlassFish application server provides homogeneous clusters and enables you to manage and modify each cluster as though it were a single entity.
As shown in the figure, each domain has a Domain Administration Server (DAS), which is used to manage the Java EE server instances in the domain. The Administration Node at the center of the figure supports the DAS. Applications, resources, and configuration information are stored with the DAS; the configuration information managed by the DAS is known as the central repository.
Each domain process must run on a physical host. When running, the domain manifests itself as a DAS. Similarly, every server instance must run on a physical host and requires a Java Virtual Machine. The GlassFish application server must be installed on each machine that runs a server instance.
Administrative Domains
Don't confuse the concepts administrative domain and network domain; the two are not related. In the world of Java EE, domain applies to an administrative domain: the machines and server instances that an administrator controls.
Two nodes are shown on the right side of the figure: Node 1 and Node 2, each hosting two GlassFish server instances.

Each node agent controls the life cycles of the instances that are configured on its machine in a given domain. In general, each life cycle is managed by the DAS according to administrator requests. The DAS delegates the actual life cycle management of each instance to its corresponding node agent. A node agent is a lightweight process that does not itself run Java EE applications.
In addition to controlling instance life cycles, a node agent monitors ("watchdogs") the server instances it is responsible for. If a server instance fails, its node agent brings it back up — without requiring administrator or DAS intervention.
Several administrative clients are shown on the left side of Figure 1. The administrative infrastructure in the DAS is based on Java Management Extensions (JMX) technology. The infrastructure in the DAS follows the instrumentation level of the JMX specification and employs management information in the form of Managed Beans (MBeans), Java objects that represent resources to be managed.
Because the MBeans are compliant with the JMX standard, you can browse them with any standard remote JMX client (such as JConsole, which is distributed with Java SE 5.0 and later). The built-in clients shown in Figure 1 use the JMX API to manage the domain. These clients need administrator privileges in order to manage the domain. The following administrative clients are of interest:
- Admin Console – The Admin Console is a
browser-based interface for managing the central repository. The
central repository provides configuration at the DAS level.
- Command-Line Interface – The asadmin command duplicates the functionality of the Admin Console. In addition, some actions can only be performed through asadmin, such as creating a domain or creating a node agent. You cannot run the Admin Console unless you have a DAS, which presupposes a domain and node agent. The asadmin command provides the means to bootstrap the architecture (a sketch follows this list).
- IDE
– The figure shows a snapshot of the JSP (JavaServer Page) editor, part of the
NetBeans IDE. Tools
like the NetBeans IDE can use the DAS to connect
with and manage an application during development. The NetBeans
IDE can also support cluster mode deployment. Most developers
work within a single domain and machine, known as a developer profile.
In the developer profile, the DAS itself acts as
the host of all the applications.
- Sun Provisioning Server – The Sun Provisioning Server is used for installation and provisioning of a DAS on machines that have only a minimal configuration. For example, consider a large data center into which you introduce a new machine. In that case, you would initialize the machine by installing an operating system, then you would install any necessary software products. After that, you could create a node agent and perhaps a DAS, depending on specific requirements. Finally, you would incorporate the machine into an existing domain by starting the node agent. The Sun Provisioning Server can accomplish all of these things without your having to perform a manual installation on the new machine.
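As a rough illustration of how asadmin bootstraps this architecture, the following commands create a domain, start it, and then create and start a node agent that registers itself with that domain's DAS. This is a sketch only: the domain name, agent name, host, and port are placeholders, and option names can vary by release.

# illustrative names and ports; adjust for your installation
bin/asadmin create-domain --adminport 4848 domain1
bin/asadmin start-domain domain1
bin/asadmin create-node-agent --host localhost --port 4848 agent1
bin/asadmin start-node-agent agent1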
Clustering Architecture
Figure 2. Clustering Architecture Overview

Figure 2 shows the GlassFish clustering architecture from a runtime-centric viewpoint. This view emphasizes the high-availability aspects of the architecture. The DAS is not shown in Figure 2, and the nodes with their application server instances are grouped as clustered instances.
At the top of Figure 2, various transports (HTTP, JMS, RMI-IIOP) are shown communicating with the clustered instances through a load-balancing tier. Custom resources, such as enterprise information systems, connect to the load balancer through resource adapters based on the Java EE Connector Architecture. All of the transports can be load balanced across the cluster, both for scalability and for fault tolerance: if a single instance fails, redundant instances remain available to take over its work.
At the bottom of the figure is a High-Availability Application State Repository, an abstraction of session state storage. The repository stores session state, including HTTP session state, stateful EJB session state, and single sign-on information. This state information can be stored either by means of memory replication or a database.
High-Availability Database Alternative
Sun Microsystems has historically offered a robust high-availability
solution for application servers based on High-Availability Database (HADB)
technology. HADB offers 99.999 percent (“five nines”) availability for
maintaining session-state information. However, its cost to implement and
maintain is relatively high and, although freely available, it has not been
offered in an open-source version.
Requests for a lighter weight, open-source alternative to accompany the open-source GlassFish application server have resulted in a memory replication feature for GlassFish version 2. Memory replication relies on instances within the cluster to store state information for one another in memory, not in a database. The HADB solution remains available, however, and may be preferred for many installations.
Memory Replication in Clusters
Several features are required of a GlassFish-compatible fault-tolerant
system that maintains state information in memory. The system must provide
high availability for HTTP session state, single sign-on state, and
EJB session state. And, it
must be compatible with existing HADB-based architectures.
The memory replication feature takes advantage of the clustering feature of GlassFish to provide most of the advantages of the HADB strategy with much less installation and administrative overhead.
In the GlassFish application server, cluster instances are organized in a ring topology. Each member in the ring sends memory state data to the next member in the ring, its replica partner, and receives state data from the previous member. As state data is updated in any member, it is replicated around the ring. The topology is shown in simplified form in Figure 3.
Figure 3. Clustering Topology
The way the topology is formed into a ring is determined by alphanumeric order of the names you give to your instances. So, if you name your instances as shown in Figure 3, Instance 1 will replicate to Instance 2, Instance 2 to Instance 3, and so on around the ring.
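For example, assuming a cluster and node agents already exist, the instances might be created with names chosen so that the replication ring forms in the intended order. The cluster, node agent, and instance names below are illustrative:

# illustrative names; agent1 and agent2 stand for node agents on two machines
bin/asadmin create-instance --cluster cluster1 --nodeagent agent1 instance1
bin/asadmin create-instance --cluster cluster1 --nodeagent agent2 instance2
bin/asadmin create-instance --cluster cluster1 --nodeagent agent1 instance3
bin/asadmin create-instance --cluster cluster1 --nodeagent agent2 instance4

With these names, instance1 replicates to instance2, instance2 to instance3, and so on around the ring. Alternating the node agents between two machines produces the placement described next.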
A typical cluster topology is shown in Figure 4. In the figure, instances are shown hosted on different physical machines. By placing Instances 1 and 3 on one machine and Instances 2 and 4 on a different machine, you maximize availability. If either machine fails catastrophically, all the data is preserved on the other machine, either in its original form or as replicas of the instances on the failed machine.
Figure 4. Typical Cluster Topology
Typical Failover Scenario
The GlassFish application server has been designed so that the load balancer tier requires no special information in order to perform well when a failure occurs. For example, the load balancer, having routed a session to Instance 1, does not need to know that it should route the session to Instance 2 when Instance 1 fails. The load balancer can issue a failover request to any instance in the cluster, a situation often described as location transparency. Response to a failure occurs in the cluster: when the load balancer reroutes a session to a working instance, that instance obtains the stored session information it needs from another instance, if necessary.

Failover requests from a load balancer fall into one of two cases:

- Case 1: The failover request lands on an instance that is already storing replication data for the session. In this case, the instance takes ownership of the session, and processing continues.
- Case 2: The failover request lands on an instance without the required replica data. In this case, the instance broadcasts a request in the form of a self-addressed stamped envelope (SASE) that requests the data. The instance with the replica data transfers it back to the requester and deletes its copy after an acknowledgment message indicates that the data has been successfully received. The data exchange is accomplished through JXTA (Juxtapose) technology.
Figure 5. Typical Cluster Topology with Load Balancer
Figure 6 illustrates failover Case 1, in which a rerouted server instance has immediate access to session state data. In the figure, Instance 1 has failed, and the load balancer's request for service happens to be routed to Instance 2, which has a replica of the required session state information.
Figure 6. Failover, Case 1
Figure 7 illustrates failover Case 2, in which the load balancer tier reroutes a session to a server instance that does not have immediate access to session state data. In the figure, Instance 4 recognizes that it does not have the necessary session state data and broadcasts a SASE to other instances in the cluster, requesting the data. The request is illustrated with yellow arrows.
Figure 7. Failover, Case 2
One of the instances (Instance 2 in Figure 7) recognizes that its replica contains the required data, and replies to the SASE request. Instance 2 transfers the session data to Instance 4, which then services the session.
Whenever an instance uses replica data to service a session (both Case 1 and Case 2), the replica data is first tested to make sure it is the current version.
Cluster Dynamic Shape Change
When an instance in a cluster fails or is deliberately taken offline by an administrator, the topology of the cluster necessarily changes.

In our example, because Instance 1 has failed, the topology of the cluster must change to maintain session cache replication. In Figure 8, Instance 2 and Instance 4 learn that Instance 1 has disappeared. Because Instance 1 has failed, attempts to communicate with it fail with I/O exceptions. If an instance is instead taken down deliberately, JXTA technology sends messages indicating that the pipes to that instance have been closed.
Figure 8. Cluster Discovers Failed Instance
In response to the disappearance of Instance 1, Instance 4 selects a new replication partner, as shown in Figure 9. Instance 4 cleans up its old connections and establishes a connection to Instance 2. The active cluster has now shrunk from four to three server instances.
Figure 9. Cluster Dynamic Shape Change
Note that each instance in the smaller cluster now does more work given the same amount of overall session activity. For resource planning, recognize that in-memory replication uses heap memory: each instance holds its own sessions plus a replica of its partner's sessions, and both shares grow when the ring loses a member. To provide high availability, ensure that you have sufficient memory headroom for each instance in the event that the cluster must shrink.
When an instance joins (or rejoins) the cluster, the process essentially occurs in reverse. When a new instance in the cluster receives a request from the load balancer tier, the instance broadcasts a request for a replication partner, selects one, and the topology adjusts automatically to embrace the new instance.
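For example, an administrator can bring a stopped or failed instance back from the CLI; once the instance is running again, it rejoins the group and the replication ring re-forms around it. The instance name here is illustrative:

bin/asadmin start-instance instance1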
Group Management Service
Group Management Service (GMS) provides dynamic membership information about a cluster and its member instances. Its design owes much to Project Shoal, a clustering framework based on Java technology. At its core, GMS is also based on JXTA technology.

GMS manages cluster shape change events in GlassFish, coordinating such events as members joining, members shutting down gracefully, or members failing. Through GMS, memory replication takes the necessary action in response to these events and provides continuous availability of service.

GMS is used in the GlassFish Application Server to monitor cluster health, and it supports the memory replication module.
In summary, GMS provides support for the following:
- Cluster membership change notifications and cluster state
- Clusterwide or member-to-member messaging
- Recovery-oriented computing, including recovery member selection, failure fencing, and recovery chaining in case of multiple failures
- Distributed cache, a lightweight implementation suitable for exchanging messages about cluster membership
- A service-provider interface (SPI) for plugging in group communication providers; the default provider is based on JXTA technology
- Timer migrations – GMS selects an instance to pick up the timers of a failed instance if necessary
Memory Replication Configuration
To configure cluster memory replication, you must perform three steps:

- Create an administrative domain. After the domain has been created, along with its node agents on the machines hosting the cluster, a cluster administrative profile is created. The profile sets defaults for replication, enables GMS, and sets the persistence-type property to replicated.
- Create a cluster and its instances, as described later in this article.
- Deploy your web applications with the availability-enabled property set to true (a CLI sketch follows this list).
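The deployment step can be performed from the Admin Console or from the CLI. A minimal CLI sketch, assuming a cluster named cluster1 and an application archive named myapp.war, might look like this:

bin/asadmin deploy --target cluster1 --availabilityenabled=true myapp.war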
Some additional tuning may be required. For example, the default heap size for the cluster admin profile is 512 MB. For an enterprise deployment, this value should be increased to 1 GB or more. This is easily accomplished through the domain admin server by setting JVM options with the following tags:
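The tags themselves are not preserved in this copy of the article. As an illustrative sketch, heap settings appear as jvm-options elements in the java-config section of the domain's domain.xml; the values shown here are examples only:

<jvm-options>-Xms1024m</jvm-options>
<jvm-options>-Xmx1024m</jvm-options>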
You also need to be sure to add the <distributable/> tag to your web application's web.xml file. This tag identifies the application as being cluster-capable.
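A minimal sketch of a cluster-capable deployment descriptor follows; the Java EE 5 namespace and servlet version shown are typical but are given here only as an illustration:

<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="2.5">
    <distributable/>
    <!-- servlet and servlet-mapping declarations go here -->
</web-app>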
The requirement to insert the <distributable/> tag is a
reminder to test your application in a cluster environment before deploying
it to a cluster. Some applications work well when deployed to a single
instance but fail when deployed to a cluster. For example, before an
application can be successfully deployed in a cluster, any objects, such as
stateful session beans, that become part of the application's HTTP session
must be serializable so that their states can be preserved across a
network. Nonserializable objects may work when deployed to a single server
instance but will fail in a cluster environment. Examine what goes into
your session data to ensure that it will work correctly in a distributed
environment.
Memory Replication Implementation
In the GlassFish version 2 application server, the memory replication feature is based on the transport and messaging capabilities of JXTA technology.

JXTA technology is familiar to many as a peer-to-peer technology. It is defined as a set of XML-based protocols that allow devices connected to a network to exchange messages and collaborate regardless of the network topology. In developing the GlassFish version 2 Application Server, JXTA technology was streamlined to handle the high volume and throughput requirements of memory replication. To improve scalability and performance, developers of the memory replication feature also benefited from collaboration with the Grizzly Project, which helps developers build scalable, robust servers with the Java New I/O API (NIO).
Group membership abstractions in JXTA technology map well to the GlassFish Application Server cluster and instance model: JXTA groups map to GlassFish clusters, and JXTA peers map to GlassFish server instances. GMS takes advantage of these group membership abstractions and provides consuming components, such as memory replication, with a notification event model for runtime events in the cluster.
In development of the GlassFish version 2 Application Server, clustering topologies have been limited to a single subnet. Future plans include leveraging JXTA to support geographically dispersed clustering topologies.
Finally, the straightforward APIs of JXTA technology made possible the very simple configuration requirements for GlassFish clustering.
Application Server Installation
To install the GlassFish Application Server:

- Type the following command:
java -jar filename.jar
For example:
java -jar glassfish-installer-v2-b58g.jar
- Accept the license agreement. After you accept the license, the files unpack in the GlassFish installation directory, by default named glassfish.
Clustering Configuration
The installation directory contains two ant build scripts, which you can use to create default domains. The two scripts are setup.xml and setup-cluster.xml.
The setup.xml script creates the developer profile; the setup-cluster.xml script creates a cluster profile. You can convert a developer profile into a cluster profile through the Sun Java System Application Server Admin Console, as described below.
To create a default domain with a clustering profile:
- Type the following command in the GlassFish installation directory:
lib/ant/bin/ant -f setup-cluster.xml
The configuration script unpacks the archives and creates a domains subdirectory and a clustering-enabled domain named domain1.
Domain Examination
You can learn about and manage domains from the CLI (the asadmin command) or the GUI (the Sun Java System Application Server Admin Console).
Examining Domains From the Command-Line Interface
The configuration step created a domains subdirectory in the installation directory. This directory stores all the GlassFish domains.
You can interact with domains from the CLI with the asadmin command, located in the bin subdirectory beneath the installation directory. The asadmin command can be used in batch or interactive mode.
For example, you can list all domains and their statuses with the following command:
bin/asadmin list-domains
If you haven't started domain1 yet, the above command issues the following output:
domain1 not running
To start domain1, type the following command:
bin/asadmin start-domain domain1
The argument domain1 is optional if only one domain exists. The command starts domain1 and provides information about the location of the log file, the version of the server, the domain name, the available web contexts, the applications that are deployed, the ports being used, and so on.
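Once the domain is up, running the list command again should reflect the new status, with domain1 reported as running:

bin/asadmin list-domains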
Examining Domains With the Sun Java System Application Server Admin Console
As an alternative to the asadmin command, you can use the Sun Java System Application Server Admin Console to control the Application Server. The next section describes how to start the console.
The Admin Console makes it easy to deploy applications from .war or .ear files, or even JBI (Java Business Integration) service assemblies. From the console, you can monitor resource use, search log files, start and stop the server, access on-line help, and perform many other administrative and server management functions.
Cluster Support for an Existing Domain
You can add clustering support to an existing domain. A domain with the developer profile does not support clustering unless you alter its configuration. From the GlassFish installation directory, you can create a developer profile domain with the following command:

lib/ant/bin/ant -f setup.xml
To enable clustering from a developer profile domain:
- From the GlassFish installation directory, start the domain that you want to reconfigure for clustering
by typing the following command:
bin/asadmin start-domain domain_name
For example:
bin/asadmin start-domain domain1
The command starts the GlassFish application server in the domain and provides information in the command shell window. The last line of information describes the capabilities of the domain; in this case:
Domain does not support application server clusters and other standalone instances.
- Start the Administration Console by directing your web browser to the following URL:
http://hostname:port
The default port is 4848. For example:
http://kindness.sun.com:4848
If the Administration Console is running on the machine on which the Application Server was installed, specify localhost for the host name. On Windows, start the Application Server Administration Console from the Start menu.
The default login is
User Name: admin
Password: adminadmin
- In the Common Tasks tree at the left side of the window, select Application Server. On the right side of the window, select the General tab.
- Click the Add Cluster Support button, as shown in the following figure.

Figure 10. Adding Cluster Support
- A confirmation page is displayed to alert you to the consequences of the change to clustering support. Among the things to consider:
  - The configuration of the domain is changed to support clusters. The change includes addition of a few system properties and template configuration.
  - The clustering-enabled server will support both clusters and standalone server instances.
  - Because a cluster often increases demands on resources, you may want to modify the administration server's JVM settings, such as heap size.
  - All applications currently deployed to the server in the clustering-enabled domain will continue to work.
  - The change to clustering support takes effect after you restart the domain server and the asadmin CLI.
  - You may want to back up the domain.xml file for the domain before proceeding, in case you want to roll back cluster support.
- Click OK to enable clustering support for the domain. A page opens to alert you to restart the server instance for the domain.
- Click the Stop Instance button.
- If the asadmin command is running in your command shell, quit the command by typing quit at the asadmin> prompt.
- Restart the domain from the CLI by typing the following command:
asadmin start-domain domain_name
For example:
asadmin start-domain domain1
If you have successfully enabled clustering for this domain, the final line of output in the command shell will read as follows:
Domain supports application server clusters and other standalone instances
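With clustering enabled, a typical next step is to create a node agent, a cluster, and clustered instances, and then start the cluster. The following sketch assumes the DAS is running with its default administration port; all names are illustrative and option names can vary by release:

# illustrative names; run from the GlassFish installation directory
bin/asadmin create-node-agent agent1
bin/asadmin start-node-agent agent1
bin/asadmin create-cluster cluster1
bin/asadmin create-instance --cluster cluster1 --nodeagent agent1 instance1
bin/asadmin create-instance --cluster cluster1 --nodeagent agent1 instance2
bin/asadmin start-cluster cluster1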
HTTP Load Balancer Plug-In
A load balancer distributes the workload among multiple application server instances, increasing the overall throughput of the system. Although the load balancer tier requires no special knowledge when routing session requests to server instances, it does need to maintain a list of available nodes. If a node fails to reply to a request as expected, the load balancer picks another node.

Load balancers can be implemented in software or hardware. Refer to information supplied by hardware vendors for details about implementing their devices.
An HTTP load balancer plug-in is available for GlassFish version 2 Application Server. The plug-in works with Sun Java System Application Server 9.1 as well as Apache Web Server and Microsoft IIS. The load balancer also enables requests to fail over from one server instance to another, contributing to high-availability installations.
For more information about how to set up the load balancer plug-in, refer to the online help available from the Sun Java System Application Server 9.1 Admin Console. For more detailed information, see Chapter 5, Configuring HTTP Load Balancing, in Sun Java System Application Server 9.1 High Availability Administration Guide.
Conclusion
The GlassFish version 2 Application Server provides a flexible clustering architecture composed of administrative domains, domain administration servers, server instances, and physical machines. The architecture combines ease of use with a high degree of administrative control to improve high availability and horizontal scalability.

- High availability - Multiple server instances, capable of sharing state, minimize single points of failure, particularly when combined with load balancing schemes. In-memory replication of server session data minimizes disruption for users when a server instance fails.
- Horizontal scalability - As user load increases, additional machines, server instances, and clusters can be added and easily configured to handle the increasing load. GMS eases the administrative burden of maintaining a high-availability cluster.
Acknowledgments

Thanks to Larry White, Abhijit Kumar, and Dinesh Patil for their help in preparing this article.
References
- Sun Java System Application Server 9.1 High Availability Administration Guide
- Sun Java System Application Server 9.1 Collection of Guides
- Download Page for GlassFish Community
- The Aquarium — GlassFish Community Wiki
- Kedar Mhaswade's Blog — adding clustering support to a domain
- Prashanth Abbagani's Blog — setting up load balancing and clustering in GlassFish version 2
- Sun Java System Application Server page — with links to the GlassFish Community site
- Sun Java System Application Server Support and Services
- Sun Java System Application Server 9.1 Support
- Java EE Learning Path — Training Course Catalog
- Sun Developer Expert Assistance
- Java Training and Certification — professional development to enhance your skills
- Developer Support — online incident-based programming advice, telephone product support, developer training courses, service plans