GET a Create Form for This File Store Collection

get

/management/weblogic/{version}/edit/partitions/{name}/resourceGroups/{name}/fileStoreCreateForm

This resource returns a pre-populated file store model that can be customized and then posted (using the POST method) to the fileStores collection resource to create a new file store.
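
For illustration, the following minimal sketch (Python, using the third-party requests library) walks through that flow. The host, port, credentials, partition name, and resource group name are placeholders; the X-Requested-By header reflects the REST API's anti-CSRF requirement on POST requests.

    import requests

    BASE = "http://localhost:7001/management/weblogic/latest/edit"  # placeholder host and version
    AUTH = ("weblogic", "welcome1")                                 # placeholder credentials
    SCOPE = "partitions/myPartition/resourceGroups/myRG"            # placeholder names

    # 1. GET the pre-populated create form for this scoped collection.
    form = requests.get(f"{BASE}/{SCOPE}/fileStoreCreateForm", auth=AUTH).json()

    # 2. Customize the returned model; drop the hypermedia links before reposting.
    model = {k: v for k, v in form.items() if k != "links"}
    model["name"] = "MyFileStore"

    # 3. POST the model to the fileStores collection identified by the rel=create link.
    resp = requests.post(
        f"{BASE}/{SCOPE}/fileStores",
        json=model,
        auth=AUTH,
        headers={"X-Requested-By": "example-client"},  # required on WLS REST POSTs
    )
    resp.raise_for_status()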

Request

Path Parameters
Query Parameters
  • The 'excludeFields' query parameter restricts which fields are returned in the response. It is a comma-separated list of field names. If present, only fields whose name is not on the list are returned. If not present, all fields are returned (unless the 'fields' query parameter is specified). Note: 'fields' must not be specified if 'excludeFields' is specified.
  • The 'fields' query parameter restricts which fields are returned in the response. It is a comma-separated list of field names. If present, only fields with matching names are returned. If not present, all fields are returned (unless the 'excludeFields' query parameter is specified). Note: 'excludeFields' must not be specified if 'fields' is specified. A usage sketch follows this list.
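
As a usage sketch (Python, with a placeholder endpoint and credentials; the field names are assumptions based on the schema below), requesting only a few fields of the create form might look like:

    import requests

    url = ("http://localhost:7001/management/weblogic/latest/edit"
           "/partitions/myPartition/resourceGroups/myRG/fileStoreCreateForm")

    # 'fields' and 'excludeFields' are mutually exclusive; send only one of them.
    form = requests.get(
        url,
        params={"fields": "name,directory,synchronousWritePolicy"},  # assumed field names
        auth=("weblogic", "welcome1"),  # placeholder credentials
    ).json()
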
Security

Response

Supported Media Types

  • application/json

200 Response

Returns a pre-populated file store model.

This method can return the following links:

  • rel=create uri=/management/weblogic/{version}/edit/partitions/{name}/resourceGroups/{name}/fileStores

    The collection resource for this create form resource.

Body (application/json)
Root Schema : File Store
Type: object
  • Minimum Value: -1
    Maximum Value: 8192
    Default Value: -1

    The smallest addressable block, in bytes, of a file. When a native wlfileio driver is available and the block size has not been configured by the user, the store selects the minimum OS-specific value for unbuffered (direct) I/O, if it is within the range [512, 8192].

    A file store's block size does not change once the file store creates its files. Changes to block size only take effect for new file stores or after the current files have been deleted. See "Tuning the WebLogic Persistent Store" in Tuning Performance of Oracle WebLogic Server.

    Constraints

    • not visible for domain scoped mbeans
  • Default Value: null

    The location of the cache directory for Direct-Write-With-Cache, ignored for other policies.

    When Direct-Write-With-Cache is specified as the SynchronousWritePolicy, cache files are created in addition to primary files (see Directory for the location of primary files). If a cache directory location is specified, the cache file path is CacheDirectory/WLStoreCache/StoreNameFileNum.DAT.cache. When specified, Oracle recommends using absolute paths, but if the directory location is a relative path, then CacheDirectory is created relative to the WebLogic Server instance's home directory. If "" or Null is specified, the Cache Directory is located in the current operating system temp directory as determined by the java.io.tmpdir Java System property (JDK's default: /tmp on UNIX, %TEMP% on Windows) and is TempDirectory/WLStoreCache/DomainNameunique-idStoreNameFileNum.DAT.cache. The value of java.io.tmpdir varies between operating systems and configurations, and can be overridden by passing -Djava.io.tmpdir=My_path on the JVM command line.

    Considerations:

    • Security: Some users may want to set specific directory permissions to limit access to the cache directory, especially if there are custom configured user access limitations on the primary directory. For a complete guide to WebLogic security, see "Securing a Production Environment for Oracle WebLogic Server."

    • Additional Disk Space Usage: Cache files consume the same amount of disk space as the primary store files that they mirror. See Directory for the location of primary store files.

    • Performance: For the best performance, a cache directory should be located in local storage instead of NAS/SAN (remote) storage, preferably in the operating system's temp directory. Relative paths should be avoided, as relative paths are located based on the domain installation, which is typically on remote storage. It is safe to delete a cache directory while the store is not running, but this may slow down the next store boot.

    • Preventing Corruption and File Locking: Two same-named stores must not be configured to share the same primary or cache directory. There are store file locking checks that are designed to detect such conflicts and prevent corruption by failing the store boot, but it is not recommended to depend on the file locking feature for correctness. See Enable File Locking.

    • Boot Recovery: Cache files are reused to speed up the File Store boot and recovery process, but only if the store's host WebLogic Server instance has been shut down cleanly prior to the current boot. For example, cache files are not re-used and are instead fully recreated: after a kill -9, after an OS or JVM crash, or after an off-line change to the primary files, such as a store admin compaction. When cache files are recreated, a Warning log message 280102 is generated.

    • Fail-Over/Migration Recovery: A file store safely recovers its data without its cache directory. Therefore, a cache directory does not need to be copied or otherwise made accessible after a fail-over or migration, and similarly does not need to be placed in NAS/SAN storage. A Warning log message 280102, which is generated to indicate the need to recreate the cache on the new host system, can be ignored.

    • Cache File Cleanup: To prevent unused cache files from consuming disk space, test and developer environments should periodically delete cache files.

    Constraints

    • legal null
    • not visible for domain scoped mbeans
  • Minimum Value: 0
    Maximum Value: 2147483647
    Default Value: 1000

    A priority that the server uses to determine when it deploys an item. The priority is relative to other deployable items of the same type.

    For example, the server prioritizes and deploys all EJBs before it prioritizes and deploys startup classes.

    Items with the lowest Deployment Order value are deployed first. There is no guarantee on the order of deployments with equal Deployment Order values. There is no guarantee of ordering across clusters.

    Constraints

    • not visible for domain scoped mbeans
  • Default Value: null

    The path name to the file system directory where the file store maintains its data files.

    • When targeting a file store to a migratable target, the store directory must be accessible from all candidate server members in the migratable target.

    • For highest availability, use either a SAN (Storage Area Network) or other reliable shared storage.

    • Use of NFS mounts is discouraged, but supported. Most NFS mounts are not transactionally safe by default, and, to ensure transactional correctness, need to be configured using your NFS vendor documentation in order to honor synchronous write requests.

    • For SynchronousWritePolicy of Direct-Write-With-Cache, see Cache Directory.

    • Additional O/S tuning may be required if the directory is hosted by Microsoft Windows; see Synchronous Write Policy for details.

    Constraints

    • legal null
    • not visible for domain scoped mbeans
  • Default Value: Distributed
    Allowed Values: [ "Distributed", "Singleton" ]

    Specifies how the instances of a configured JMS artifact are named and distributed when cluster-targeted. A JMS artifact is cluster-targeted when its target is directly set to a cluster, or when it is scoped to a resource group and the resource group is in turn targeted to a cluster. When this setting is configured on a store, it applies to all JMS artifacts that reference the store. Valid options (a configuration fragment follows this property's description):

    • Distributed Creates an instance on each server JVM in a cluster. Required for all SAF agents and for cluster-targeted or resource-group-scoped JMS servers that host distributed destinations.

    • Singleton Creates a single instance on a single server JVM within a cluster. Required for cluster-targeted or resource-group-scoped JMS servers that host standalone (non-distributed) destinations and for cluster-targeted or resource-group-scoped path services. The Migration Policy must be On-Failure or Always when using this option with a JMS server, On-Failure when using this option with a messaging bridge, and Always when using this option with a path service.

    Instance Naming Note:

    • The DistributionPolicy determines the instance name suffix for cluster-targeted JMS artifacts. The suffix for a cluster-targeted Singleton is -01 and for a cluster-targeted Distributed is @ClusterMemberName.

    Messaging Bridge Notes:

    • When an instance per server is desired for a cluster-targeted messaging bridge, Oracle recommends setting the bridge Distribution Policy and Migration Policy to Distributed/Off, respectively; these are the defaults.

    • When a single instance per cluster is desired for a cluster-targeted bridge, Oracle recommends setting the bridge Distribution Policy and Migration Policy to Singleton/On-Failure, respectively.

    • If you cannot cluster-target a bridge and still need singleton behavior in a configured cluster, you can target the bridge to a migratable target and configure the Migration Policy on the migratable target to Exactly-Once.

    Constraints

    • not visible for domain scoped mbeans
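
    For example, a store backing standalone (non-distributed) destinations in a cluster must pair Singleton with a compatible migration policy. A minimal, hypothetical payload fragment (attribute names assumed to follow this schema's camelCase convention):

        # Hypothetical create-payload fragment; attribute names are assumptions.
        singleton_store = {
            "name": "SingletonStore",
            "distributionPolicy": "Singleton",
            "migrationPolicy": "On-Failure",  # Singleton on a JMS server needs On-Failure or Always
        }
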
  • Read Only: true
    Default Value: false

    Return whether the MBean was created dynamically or is persisted to config.xml.

    Constraints

    • not visible for domain scoped mbeans
  • Default Value: -1

    Specifies the amount of time, in seconds, to delay before failing a cluster-targeted JMS artifact instance back to its preferred server after the preferred server failed and was restarted.

    This delay allows time for the system to stabilize and dependent services to be restarted, preventing a system failure during a reboot.

    • A value > 0 specifies the time, in seconds, to delay before failing a JMS artifact back to its user-preferred server.

    • A value of 0 indicates that the instance will never fail back.

    • A value of -1 indicates that there is no delay and the instance fails back immediately.

    Note: This setting only applies when the JMS artifact is cluster-targeted and the Migration Policy is set to On-Failure or Always.

    Constraints

    • not visible for domain scoped mbeans
  • Minimum Value: -1
    Default Value: -1

    Specify a limit for the number of cluster-targeted JMS artifact instances that can fail over to a particular JVM.

    This can be used to prevent too many instances from starting on a server, avoiding a system failure when starting too few servers of a formerly large cluster.

    A typical limit value should allow all instances to run in the smallest desired cluster size, which means (smallest-cluster-size * (limit + 1)) should equal or exceed the total number of instances. A worked check of this rule follows this property's description.

    • A value of -1 means there is no fail-over limit (unlimited).

    • A value of 0 prevents any fail-overs of cluster-targeted JMS artifact instances, so no more than one instance will run per server (this is an instance that has not failed over).

    • A value of 1 allows one fail-over instance on each server, so no more than two instances will run per server (one failed-over instance plus an instance that has not failed over).

    Note: This setting only applies when the JMS artifact is cluster-targeted and the Migration Policy is set to On-Failure or Always.

    Constraints

    • not visible for domain scoped mbeans
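
    Worked check of the sizing rule above, assuming a hypothetical deployment of 12 instances that must still fit when the cluster shrinks to 4 servers:

        # smallest-cluster-size * (limit + 1) must equal or exceed the instance count.
        def fail_over_limit_ok(smallest_cluster_size: int, limit: int, instances: int) -> bool:
            return smallest_cluster_size * (limit + 1) >= instances

        assert fail_over_limit_ok(4, 2, 12)      # 4 * (2 + 1) = 12 slots for 12 instances
        assert not fail_over_limit_ok(4, 1, 12)  # 4 * (1 + 1) = 8 slots is too few
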
  • Default Value: true

    Determines whether OS file locking is used.

    When file locking protection is enabled, a store boot fails if another store instance already has opened the store files. Do not disable this setting unless you have procedures in place to prevent multiple store instances from opening the same file. File locking is not required but helps prevent corruption in the event that two same-named file store instances attempt to operate in the same directories. This setting applies to both primary and cache files.

    Constraints

    • not visible for domain scoped mbeans
  • Read Only: true

    Return the unique id of this MBean instance.

    Constraints

    • not visible for domain scoped mbeans
  • Default Value: 60

    Specifies the amount of time, in seconds, to delay before starting a cluster-targeted JMS instance on a newly booted WebLogic Server instance. When this setting is configured on a store, it applies to all JMS artifacts that reference the store.

    This allows time for the system to stabilize and dependent services to be restarted, preventing a system failure during a reboot.

    • A value > 0 is the time, in seconds, to delay before loading resources after a failure and restart.

    • A value of 0 specifies no delay.

    Note: This setting only applies when the JMS artifact is cluster-targeted and the Migration Policy is set to On-Failure or Always.

    Constraints

    • not visible for domain scoped mbeans
  • Minimum Value: 0
    Default Value: 0

    The initial file size, in bytes.

    • Set InitialSize to pre-allocate file space during a file store boot. If InitialSize exceeds MaxFileSize, a store creates multiple files (number of files = InitialSize/MaxFileSize rounded up; see the worked example after this property).

    • A file store automatically reuses the space from deleted records and automatically expands a file if there is not enough space for a new write request.

    • Use InitialSize to limit or prevent file expansions during runtime, as file expansion introduces temporary latencies that may be noticeable under rare circumstances.

    • Changes to initial size only take effect for new file stores, or after any current files have been deleted prior to restart.

    • See Maximum File Size.

    Constraints

    • not visible for domain scoped mbeans
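
    A worked example of the pre-allocation arithmetic, using hypothetical sizes:

        import math

        # number of files = InitialSize / MaxFileSize, rounded up
        initial_size = 3 * 1024**3   # hypothetical 3 GB pre-allocation
        max_file_size = 1342177280   # the default MaxFileSize (1.25 GB)
        assert math.ceil(initial_size / max_file_size) == 3
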
  • Minimum Value: -1
    Maximum Value: 67108864
    Default Value: -1

    The I/O buffer size, in bytes, automatically rounded down to the nearest power of 2.

    • For the Direct-Write-With-Cache policy when a native wlfileio driver is available, IOBufferSize describes the maximum portion of a cache view that is passed to a system call. This portion does not consume off-heap (native) or Java heap memory.

    • For the Direct-Write and Cache-Flush policies, IOBufferSize is the size of a per store buffer which consumes off-heap (native) memory, where one buffer is allocated during run-time, but multiple buffers may be temporarily created during boot recovery.

    • When a native wlfileio driver is not available, the setting applies to off-heap (native) memory for all policies (including Disabled).

    • For the best runtime performance, Oracle recommends setting IOBufferSize so that it is larger than the largest write (multiple concurrent store requests may be combined into a single write).

    • For the best boot recovery time performance of large stores, Oracle recommends setting IOBufferSize to at least 2 megabytes.

    See AllocatedIOBufferBytes to find out the actual allocated off-heap (native) memory amount. It is a multiple of IOBufferSize for the Direct-Write and Cache-Flush policies, or zero.

    Constraints

    • not visible for domain scoped mbeans
  • Default Value: null

    The name used by subsystems to refer to different stores on different servers using the same name.

    For example, an EJB that uses the timer service may refer to its store using the logical name, and this name may be valid on multiple servers in the same cluster, even if each server has a store with a different physical name.

    Multiple stores in the same domain or the same cluster may share the same logical name. However, a given logical name may not be assigned to more than one store on the same server.

    Constraints

    • legal null
    • not visible for domain scoped mbeans
  • Minimum Value: 1048576
    Maximum Value: 2139095040
    Default Value: 1342177280

    The maximum file size, in bytes, of an individual data file.

    • The MaxFileSize value affects the number of files needed to accommodate a store of a particular size (number of files = store size/MaxFileSize rounded up).

    • A file store automatically reuses space freed by deleted records and automatically expands individual files up to MaxFileSize if there is not enough space for a new record. If there is no space left in existing files for a new record, a store creates an additional file.

    • A small number of larger files is normally preferred over a large number of smaller files as each file allocates Window Buffer and file handles.

    • If MaxFileSize is larger than 2^24 * BlockSize, then MaxFileSize is ignored, and the value becomes 2^24 * BlockSize. The default BlockSize is 512, and 2^24 * 512 is 8 GB (see the arithmetic check after this property).

    • The minimum size for MaxFileSize is 10 MB when multiple data files are used by the store. If InitialSize is less than MaxFileSize then a single file will be created of InitialSize bytes. If InitialSize is larger than MaxFileSize then (InitialSize / MaxFileSize) files will be created of MaxFileSize bytes and an additional file if necessary to contain any remainder.

    • See Initial Size.

    Constraints

    • not visible for domain scoped mbeans
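
    The arithmetic check for the ceiling quoted above, using the default 512-byte block size:

        # MaxFileSize is capped at 2**24 * BlockSize; with BlockSize = 512 that is 8 GB.
        block_size = 512
        assert 2**24 * block_size == 8 * 1024**3  # 8,589,934,592 bytes
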
  • Minimum Value: -1
    Maximum Value: 1073741824
    Default Value: -1

    The maximum amount of data, in bytes and rounded down to the nearest power of 2, mapped into the JVM's address space per primary store file. Applies to synchronous write policies Direct-Write-With-Cache and Disabled but only when the native wlfileio library is loaded.

    A window buffer does not consume Java heap memory, but does consume off-heap (native) memory. If the store is unable to allocate the requested buffer size, it allocates smaller and smaller buffers until it reaches MinWindowBufferSize, and then fails if it cannot honor MinWindowBufferSize.

    Oracle recommends setting the max window buffer size to more than double the size of the largest write (multiple concurrently updated records may be combined into a single write), and greater than or equal to the file size, unless there are other constraints (a sizing sketch follows this property). 32-bit JVMs may impose a total limit of between 2 and 4 GB for combined Java heap plus off-heap (native) memory usage.

    • See store attribute AllocatedWindowBufferBytes to find out the actual allocated Window Buffer Size.

    • See Maximum File Size and Minimum Window Buffer Size.

    Constraints

    • not visible for domain scoped mbeans
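
    One way to follow that guidance is sketched below (an illustrative helper, not an official formula): request a power of two at least twice the largest write, so the store's own round-down step cannot land below the recommended size.

        def suggest_max_window_buffer(largest_write_bytes: int) -> int:
            # Round 2x the largest write up to a power of two. The store rounds the
            # configured value down to a power of two, so requesting a power of two
            # keeps the effective buffer at or above the doubled size.
            target = 2 * largest_write_bytes
            return 1 << (target - 1).bit_length()

        assert suggest_max_window_buffer(1_000_000) == 2_097_152  # 2 MiB for ~1 MB writes
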
  • Default Value: Off
    Allowed Values: [ "Off", "On-Failure", "Always" ]

    Controls migration and restart behavior of cluster-targeted JMS service artifact instances. When this setting is configured on a cluster-targeted store, it applies to all JMS artifacts that reference the store. See the migratable target settings for enabling migration and restart on migratable-targeted JMS artifacts.

    • Off Disables migration support for cluster-targeted JMS service objects, and changes the default for Restart In Place to false. If you want a restart to be enabled when the Migration Policy is Off, then Restart In Place must be explicitly configured to true. This policy cannot be combined with the Singleton Distribution Policy.

    • On-Failure Enables automatic migration and restart of instances on the failure of a subsystem Service or WebLogic Server instance, including automatic fail-back and load balancing of instances.

    • Always Provides the same behavior as On-Failure and automatically migrates instances even in the event of a graceful shutdown or a partial cluster start.

    Note: Cluster leasing must be configured for On-Failure and Always.

    Messaging Bridge Notes:

    • When an instance per server is desired for a cluster-targeted messaging bridge, Oracle recommends setting the bridge Distribution Policy and Migration Policy to Distributed/Off, respectively; these are the defaults.

    • When a single instance per cluster is desired for a cluster-targeted bridge, Oracle recommends setting the bridge Distribution Policy and Migration Policy to Singleton/On-Failure, respectively.

    • A Migration Policy of Always is not recommended for bridges.

    • If you cannot cluster-target a bridge and still need singleton behavior in a configured cluster, you can target the bridge to a migratable target and configure the Migration Policy on the migratable target to Exactly-Once.

    Constraints

    • not visible for domain scoped mbeans
  • Minimum Value: -1
    Maximum Value: 1073741824
    Default Value: -1

    The minimum amount of data, in bytes and rounded down to the nearest power of 2, mapped into the JVM's address space per primary store file. Applies to synchronous write policies Direct-Write-With-Cache and Disabled, but only when a native wlfileio library is loaded. See Maximum Window Buffer Size.

    Constraints

    • not visible for domain scoped mbeans
  • Read Only: true

    The user-specified name of this MBean instance.

    This name is included as one of the key properties in the MBean's javax.management.ObjectName:

    Name=user-specified-name

    Constraints

    • legal null
  • Optional information that you can include to describe this configuration.

    WebLogic Server saves this note in the domain's configuration file (config.xml) as XML PCDATA. All left angle brackets (<) are converted to the XML entity &lt;. Carriage returns/line feeds are preserved.

    Note: If you create or edit a note from the Administration Console, the Administration Console does not preserve carriage returns/line feeds.

  • Minimum Value: -1
    Default Value: 6

    Specifies the maximum number of restart attempts.

    • A value > 0 specifies the maximum number of restart attempts.

    • A value of 0 specifies the same behavior as setting getRestartInPlace to false.

    • A value of -1 means infinite retry restart until it either starts or the server instance shuts down.

    Constraints

    • not visible for domain scoped mbeans
  • Default Value: 240

    Specifies the amount of time, in seconds, to delay before a partially started cluster starts all cluster-targeted JMS artifact instances that are configured with a Migration Policy of Always or On-Failure.

    Before this timeout expires or all servers are running, a cluster starts a subset of such instances based on the total number of servers running and the configured cluster size. Once the timeout expires or all servers have started, the system considers the cluster stable and starts any remaining services.

    This delay ensures that services are balanced across a cluster even if the servers are started sequentially. It is ignored after a cluster is fully started (stable) or when individual servers are started.

    • A value > 0 specifies the time, in seconds, to delay before a partially started cluster starts dynamically configured services.

    • A value of 0 specifies no delay.

    Constraints

    • not visible for domain scoped mbeans
  • Enables a periodic automatic in-place restart of failed cluster-targeted or standalone-server-targeted JMS artifact instance(s) running on healthy WebLogic Server instances. See the migratable target settings for in-place restarts of migratable-targeted JMS artifacts. When the Restart In Place setting is configured on a store, it applies to all JMS artifacts that reference the store.

    • If the Migration Policy of the JMS artifact is set to Off, Restart In Place is disabled by default.

    • If the Migration Policy of the JMS artifact is set to On-Failure or Always, Restart In Place is enabled by default.

    • This attribute is not used by WebLogic messaging bridges which automatically restart internal connections as needed.

    • If a cluster-targeted JMS artifact's Migration Policy is set to On-Failure or Always and a restart fails after the configured maximum retry attempts, the instance migrates to a different server within the cluster.

    Constraints

    • not visible for domain scoped mbeans
  • Minimum Value: 1
    Default Value: 30

    Specifies the amount of time, in seconds, to wait between attempts to restart a failed service instance.

    Constraints

    • not visible for domain scoped mbeans
  • Default Value: Direct-Write
    Allowed Values: [ "Disabled", "Cache-Flush", "Direct-Write", "Direct-Write-With-Cache" ]

    The disk write policy that determines how the file store writes data to disk.

    This policy also affects the JMS file store's performance, scalability, and reliability. Oracle recommends Direct-Write-With-Cache, which tends to have the highest performance. The default value is Direct-Write. The valid policy options (see the example payload following this property) are:

    • Direct-Write Direct I/O is supported on all platforms. When available, file stores in direct I/O mode automatically load the native I/O wlfileio driver. This option tends to out-perform Cache-Flush and tends to be slower than Direct-Write-With-Cache. This mode does not require a native store wlfileio driver, but performs faster when one is available.

    • Direct-Write-With-Cache Store records are written synchronously to primary files in the directory specified by the Directory attribute and asynchronously to a corresponding cache file in the Cache Directory. The Cache Directory description covers the disk space, locking, security, and performance implications. This mode requires a native store wlfileio driver; if the native driver cannot be loaded, the write mode automatically switches to Direct-Write. See Cache Directory.

    • Cache-Flush Transactions cannot complete until all of their writes have been flushed down to disk. This policy is reliable and scales well as the number of simultaneous users increases. It is transactionally safe, but tends to be a lower performer than the direct-write policies.

    • Disabled Transactions are complete as soon as their writes are cached in memory, instead of waiting for the writes to successfully reach the disk. This is the fastest policy because write requests do not block waiting to be synchronized to disk, but, unlike the other policies, it is not transactionally safe in the event of operating system or hardware failures. Such failures can lead to duplicate or lost data/messages. This option does not require native store wlfileio drivers, but may run faster when they are available. Some non-WebLogic JMS vendors default to a policy that is equivalent to Disabled.

    Notes:

    • When available, file stores load WebLogic wlfileio native drivers, which can improve performance. These drivers are included with Windows, Solaris, Linux, and AIX WebLogic installations.

    • Certain older versions of Microsoft Windows may incorrectly report storage device synchronous write completion if the Windows default Write Cache Enabled setting is used. This violates the transactional semantics of transactional products (not specific to Oracle), including file stores configured with a Direct-Write (default) or Direct-Write-With-Cache policy, as a system crash or power failure can lead to a loss or a duplication of records/messages. One visible symptom is persistent message/transaction throughput that exceeds the physical capabilities of your storage device. You can address the problem by applying a Microsoft-supplied patch, disabling the Windows Write Cache Enabled setting, or using a power-protected storage device. See http://support.microsoft.com/kb/281672 and http://support.microsoft.com/kb/332023.

    • NFS storage note: On some operating systems, native driver memory-mapping is incompatible with NFS when files are locked. Stores with synchronous write policies Direct-Write-With-Cache or Disabled, and WebLogic JMS paging stores, enhance performance by using the native wlfileio driver to perform memory-map operating system calls. When a store detects an incompatibility between NFS, file locking, and memory mapping, it automatically downgrades to conventional read/write system calls instead of memory mapping. For best performance, Oracle recommends investigating alternative NFS client drivers, configuring a non-NFS storage location, or, in controlled environments and at your own risk, disabling the file locks (see Enable File Locking). For more information, see "Tuning the WebLogic Persistent Store" in Tuning Performance of Oracle WebLogic Server.

    Constraints

    • not visible for domain scoped mbeans
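
    Tying the policy choice back to the create flow, here is a hypothetical create payload that opts in to Direct-Write-With-Cache (attribute names assumed from this schema; endpoint and credentials are placeholders):

        import requests

        store = {
            "name": "FastStore",
            "directory": "/shared/stores/FastStore",      # primary files on shared storage
            "cacheDirectory": "/tmp/FastStoreCache",      # cache files on fast local disk
            "synchronousWritePolicy": "Direct-Write-With-Cache",
        }
        requests.post(
            "http://localhost:7001/management/weblogic/latest/edit"
            "/partitions/myPartition/resourceGroups/myRG/fileStores",
            json=store,
            auth=("weblogic", "welcome1"),                # placeholder credentials
            headers={"X-Requested-By": "example-client"}, # required on WLS REST POSTs
        ).raise_for_status()
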
  • Items
    Title: Items

    Return all tags on this Configuration MBean.

    Constraints

    • not visible for domain scoped mbeans
  • Target References
    Title: Target References
    Contains the array of target references.

    The server instances, clusters, or migratable targets defined in the current domain that are candidates for hosting a file store, JDBC store, or replicated store. If scoped to a Resource Group or Resource Group Template, the target is inherited from the Virtual Target.

    When selecting a cluster, the store must be targeted to the same cluster as the JMS server. When selecting a migratable target, the store must be targeted to the same migratable target as the migratable JMS server or SAF agent. As a best practice, a path service should use its own custom store and share the same target as the store.

    Constraints

    • not visible for domain scoped mbeans
  • Read Only: true

    Returns the type of the MBean.

    Constraints

    • unharvestable
  • Read Only: true
    Default Value: null

    Overrides the name of the XAResource that this store registers with JTA.

    You should not normally set this attribute. Its purpose is to allow the name of the XAResource to be overridden when a store has been upgraded from an older release and the store contained prepared transactions. The generated name should be used in all other cases.

    Constraints

    • legal null
    • not visible for domain scoped mbeans
Nested Schema : Items
Type: array
Title: Items

Return all tags on this Configuration MBean.

Constraints

  • not visible for domain scoped mbeans
Nested Schema : Target References
Type: array
Title: Target References
Contains the array of target references.

The server instances, clusters, or migratable targets defined in the current domain that are candidates for hosting a file store, JDBC store, or replicated store. If scoped to a Resource Group or Resource Group Template, the target is inherited from the Virtual Target.

When selecting a cluster, the store must be targeted to the same cluster as the JMS server. When selecting a migratable target, the store must be targeted to the same migratable target as the migratable JMS server or SAF agent. As a best practice, a path service should use its own custom store and share the same target as the store.

Constraints

  • not visible for domain scoped mbeans
Nested Schema : Target Reference
Type: object
Title: Target Reference
Contains the target reference.
Nested Schema : Identity
Type: array
Title: Identity
A reference to another WebLogic Server REST resource.