As mobile devices have become more capable and applications tackle a wider array of functionality, there is a growing need to run applications concurrently. Early ME profile specifications did not prohibit concurrent execution, but they were not explicit in detailing the expected behavior of concurrently running applications, and they did not provide the mechanisms applications need to detect and handle state changes.
Support for the concurrent execution of two or more applications is optional in MEEP 8. For implementations that do support it, the specification defines the expected behavior for basic concurrency issues so that implementations behave as consistently as possible without dictating implementation decisions.
The primary usability rule for defining concurrency behavior is that the applications in question should behave as though they were running independently. To achieve this, data between applications should be isolated, resource contention should be handled as transparently as possible, and non-fatal errors should be handled quietly.
If a MEEP 8 implementation includes a bitmap-oriented user interface API (which is not specified within the scope of this specification) and that API supports a focus concept, the behavior of an application that is not currently in focus is defined as follows.
A platform capable of running an application not in focus simultaneously with the one in focus is also capable of supporting the model where the application not in focus is paused. The reverse, however, is not true. Therefore, in order to standardize the general behavior of concurrency across platforms, implementations must provide the ability to run applications simultaneously. It is up to each application to use the mechanisms provided to determine its proper behavior when not in focus.
To achieve an environment where each application behaves in the same manner regardless of what other applications are running, the data accessible by each application must be isolated from the data visible to any application in a different scope.
Different applications may also contain classes that have the same name but are different classes. Therefore the class space for each application context must also be separated. For instance, in Figure 5-1 below, Application 1 contains a class named “B”. Application 2 also has a class named “B”. Since Application 1 and Application 2 are in different contexts, these might be two completely different classes named “B”. Therefore, Application 1 loads the class it requires into its own class space while Application 2 loads a different version.
Figure 5-1 : Data Isolation Between Concurrent Applications
Every application, regardless of the value set in the
attribute, MUST be launched within its own execution environment.
Classes and static data MUST NOT be shared among applications, even those
within the same application suite. However, all other factors that
currently make up the definition of an application suite, such as
permissions, RMS access, and resources, remain per suite.
In this model, although the classes loaded by applications in the same suite are identical because they come from the same JAR, the loading of each class and the maintenance of its state are handled per application, not per application suite. This includes the static variable data of all classes. For instance, if a system class A has a static variable x, changing the value of x in application 1 does not affect the value of x in application 2, even if the applications are in the same suite. Data between application suites is likewise maintained separately per suite.
Figure 5-2 : Concurrent Application Class and Data Separation
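The class and static-data separation described above can be approximated on a standard Java SE VM with independent class loaders. The following self-contained sketch (all names are illustrative, not MEEP API) defines the same class “B” in two loaders standing in for two application contexts, and shows that changing the static field through one copy leaves the other copy untouched:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.lang.reflect.Field;

// Approximates two application contexts with two independent class loaders.
// Each loader defines its own copy of class B from the same bytecode, so
// B's static state is maintained per "application", mirroring Figure 5-2.
public class ClassSpaceDemo {
    static class AppLoader extends ClassLoader {
        @Override
        protected Class<?> loadClass(String name, boolean resolve)
                throws ClassNotFoundException {
            if (!name.equals("B")) return super.loadClass(name, resolve);
            // Define B ourselves instead of delegating, so each AppLoader
            // instance gets its own Class object for the same bytecode.
            try (InputStream in =
                         ClassSpaceDemo.class.getResourceAsStream("/B.class");
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                int c;
                while ((c = in.read()) != -1) out.write(c);
                byte[] bytes = out.toByteArray();
                return defineClass(name, bytes, 0, bytes.length);
            } catch (IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Class<?> b1 = new AppLoader().loadClass("B"); // "Application 1"
        Class<?> b2 = new AppLoader().loadClass("B"); // "Application 2"
        System.out.println(b1 == b2); // false: same name, distinct classes

        // The two copies live in different runtime packages, so bypass the
        // access check before touching the package-private static field.
        Field x1 = b1.getDeclaredField("x");
        x1.setAccessible(true);
        Field x2 = b2.getDeclaredField("x");
        x2.setAccessible(true);

        x1.setInt(null, 42);                 // change x in "Application 1"
        System.out.println(x1.getInt(null)); // 42
        System.out.println(x2.getInt(null)); // 0: "Application 2" unaffected
    }
}

class B {
    static int x = 0;
}
```

The same-named classes compare unequal because a class's identity includes its defining loader, which is the mechanism this sketch borrows to stand in for separate execution environments.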
Associating an execution environment to an application rather than an application suite is more consistent with the application models found in other Java Platforms.
There is no specified resource contention policy between concurrently executing applications. Implementations are free to implement any policy that best suits the platform. For instance, a platform could have a policy where a particular native resource, such as sound channels, is virtualized among running applications, or could adopt a policy where the resources are allocated on a first-come-first-served basis.
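As an illustration of the first-come-first-served option, the sketch below models a fixed pool of sound channels with a fair semaphore; the class and its methods are hypothetical and not part of any MEEP API:

```java
import java.util.concurrent.Semaphore;

// Hypothetical first-come-first-served contention policy for a fixed pool
// of native sound channels; names are illustrative, not MEEP API.
public class SoundChannelPool {
    private final Semaphore channels;

    public SoundChannelPool(int size) {
        // A fair semaphore grants permits in request (FIFO) order.
        this.channels = new Semaphore(size, true);
    }

    /** Returns true if a channel was granted to the requesting application. */
    public boolean tryAcquire() {
        return channels.tryAcquire();
    }

    /** Returns a channel to the pool for the next requester. */
    public void release() {
        channels.release();
    }
}
```

A virtualizing policy would instead mix or multiplex the channels transparently; either choice is permitted by this section.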
When only a single application is running, there is much more
flexibility in the handling of errors than when multiple applications
run simultaneously. In the single-application case, most errors can be
handled by shutting down or re-initializing the virtual machine.
However, when multiple applications are being executed simultaneously,
an error affecting one application may not require that the other
applications be stopped. For instance, a NoClassDefFoundError may be
fatal for the application involved but should not affect any other
application which is also running.
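A minimal sketch of this containment, using a hypothetical runApplication helper and a simulated missing class, might look like the following; catching LinkageError covers NoClassDefFoundError without tearing down the shared VM:

```java
// Sketch of confining a fatal per-application error. The runApplication
// helper and the missing class name are illustrative; the point is that the
// catch limits the failure to one application while others keep running.
public class ErrorContainmentDemo {
    static String runApplication(String mainClass) {
        try {
            Class.forName(mainClass); // would link and start the application
            return "running";
        } catch (ClassNotFoundException | LinkageError e) {
            // Fatal for this application only; do not stop the others.
            return "failed: " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(runApplication("java.lang.String"));
        // running
        System.out.println(runApplication("com.example.Missing"));
        // failed: ClassNotFoundException
    }
}
```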
The fine-grained details and algorithms used in determining thread and application scheduling will always be a platform-specific implementation detail. Due to the differences in implementations and platforms, it is impractical to mandate a specific policy for all devices. However, the individual policies used on all devices should share some similar characteristics.
Implementations SHOULD implement a scheduling policy that avoids starvation. No running application should ever be completely starved of execution time, regardless of priority level or visibility.
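A round-robin cycle is one simple way to satisfy this recommendation. The sketch below is illustrative only, since real scheduling remains platform-specific; it demonstrates the starvation-freedom property by giving every queued task one slice per cycle:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative round-robin sketch: every runnable task gets a slice each
// cycle, so none is starved, regardless of any priority or visibility.
public class RoundRobinSketch {
    private final Deque<Runnable> queue = new ArrayDeque<>();

    public void add(Runnable task) {
        queue.addLast(task);
    }

    public void runOneCycle() {
        for (int i = queue.size(); i > 0; i--) {
            Runnable task = queue.pollFirst();
            task.run();           // one time slice
            queue.addLast(task);  // requeued, so it runs again next cycle
        }
    }
}
```

A real implementation would add priorities or weights on top of this, but the invariant to preserve is the same: every application advances in every cycle.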
The AMS MUST NOT launch a second instance of an application. If the AMS
receives a request to launch an application that is already running
(usually from an event), the AMS MUST instead post a relaunch system event
to the application. The relaunch system event name is unique to the target
application and is defined by
APPLICATION_RELAUNCH_PREFIX. The event value MUST be
Applications that intend to handle a second invocation request should register an
for the concatenated system event name.
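Since the event API surface is not reproduced in this section, the following self-contained simulation shows the intended single-instance behavior. All names here are illustrative stand-ins, not the MEEP event API; only the APPLICATION_RELAUNCH_PREFIX concept comes from the text above:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Consumer;

// Simulation of the single-instance rule: a duplicate launch request never
// starts a second instance; it is converted into a per-application relaunch
// event delivered to the already-running instance.
public class RelaunchDemo {
    // Placeholder value; the real prefix is defined by the specification.
    static final String APPLICATION_RELAUNCH_PREFIX = "RELAUNCH_";

    static class MiniAms {
        private final Set<String> running = new HashSet<>();
        private final Map<String, Consumer<String>> listeners = new HashMap<>();

        // An application registers for its own relaunch event, whose name is
        // the prefix concatenated with the application's identity.
        void registerRelaunchListener(String app, Consumer<String> listener) {
            listeners.put(APPLICATION_RELAUNCH_PREFIX + app, listener);
        }

        String launch(String app, String payload) {
            if (running.add(app)) {
                return "started";
            }
            // Already running: post the relaunch event instead of launching.
            listeners.getOrDefault(APPLICATION_RELAUNCH_PREFIX + app, p -> { })
                     .accept(payload);
            return "relaunch event posted";
        }
    }

    public static void main(String[] args) {
        MiniAms ams = new MiniAms();
        StringBuilder received = new StringBuilder();
        ams.registerRelaunchListener("Mail", received::append);
        System.out.println(ams.launch("Mail", "first"));  // started
        System.out.println(ams.launch("Mail", "second")); // relaunch event posted
        System.out.println(received);                     // second
    }
}
```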
Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.