Sunday, July 31, 2011

@Coherence*Web: can't log in to the admin console, NullPointerException at weblogic.servlet.internal.session.CoherenceWebSessionContextImpl.getSessionInternal

Problem:
After an OutOfMemoryError in Coherence, I restarted the cache server and
the WebLogic admin server. When I accessed the admin console, the following
error occurred and I could not log in.

Log:
<2011/07/31 0:36:53 JST> <Error> <HTTP> <BEA-101020>
<[ServletContext@50606836[app:consoleapp module:console path:/console
spec-version:2.5]] Servlet failed with an exception
java.lang.NullPointerException
at
weblogic.servlet.internal.session.CoherenceWebSessionContextImpl.getSessionInternal(CoherenceWebSessionContex
tImpl.java:516)
at
weblogic.servlet.internal.ServletRequestImpl$SessionHelper.updateSessionId(ServletRequestImpl.java:2978)
at
weblogic.servlet.security.internal.SecurityModule.generateNewSession(SecurityModule.java:318)
at
weblogic.servlet.security.internal.SecurityModule.login(SecurityModule.java:305)
at
weblogic.servlet.security.internal.FormSecurityModule.processJSecurityCheck(FormSecurityModule.java:302)
Truncated. see log file for complete stacktrace
>

Friday, July 29, 2011

Fwd: Fwd: Coherence cache starts two times; NoClassDefFoundError occurs even though the jar is in WEB-INF

It seems that if the server's publish state is "Republish", Eclipse's
"Run" does a republish and then a run, so Coherence starts twice. If the
state is "Synchronized", "Run" does not republish, so Coherence starts
only once.

Fwd: Coherence cache starts two times; NoClassDefFoundError occurs even though the jar is in WEB-INF

Found a different solution: delete the folder at
workspace\.metadata\.plugins\org.eclipse.core.resources\.projects\XXXX\.indexes.
The .indexes folder seems to be recreated every time the project is published.

Coherence cache starts two times; NoClassDefFoundError occurs even though the jar is in WEB-INF

Problem:
@Eclipse 3.6, WebLogic, OEPE
The Coherence cache starts two times, and a NoClassDefFoundError occurs
even though the jar is in WEB-INF.
Solution:
Change the publish setting:
http://forums.oracle.com/forums/thread.jspa?threadID=2147007&tstart=0

Cannot start a managed server from the admin console.

Problem: Cannot start a managed server from the admin console.
Solution: The Node Manager listen port in the machine settings must match
the ListenPort in \wlserver_10.3\common\nodemanager\nodemanager.properties.
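For reference, the file side of that pairing looks like this; 5556 is only an example value and must match whatever port the machine's Node Manager settings in the admin console show:

```
# \wlserver_10.3\common\nodemanager\nodemanager.properties
# ListenPort must equal the Node Manager listen port configured on the
# machine in the admin console (5556 is just an example value)
ListenAddress=localhost
ListenPort=5556
```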

When executing Run from Eclipse, a 'connection refused' error occurs.

Problem:
When executing Run from Eclipse, a 'connection refused' error occurs.
Solution:
You probably need to access the managed server from a browser at least once.

@Coherence*Web: error at WebLogic Coherence launch. weblogic.application.ModuleException: No storage-enabled nodes exist for service DistributedSessions

Solved:
Needed to set -Dtangosol.coherence.session.localstorage=true, not
-Dtangosol.coherence.distributed.localstorage=true, as a Java option
when using Coherence*Web.
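The reason is visible in the session-cache-config.xml excerpts later in these notes: the <local-storage> element of the DistributedSessions scheme is bound to the system property tangosol.coherence.session.localstorage, so the distributed.localstorage flag never reaches that service. A minimal sketch of the relevant cache-server options (the classpath/cacheconfig values are placeholders for your own layout):

```
@rem Illustrative Coherence*Web storage-node options only; append to the
@rem launch line of your cache server. Paths are placeholders.
set java_opts=%java_opts% ^
    -Dtangosol.coherence.session.localstorage=true ^
    -Dtangosol.coherence.cacheconfig=WEB-INF/classes/session-cache-config.xml
```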

Log:
<2011/07/25 17:53:40 JST> <Error> <Deployer> <BEA-149231> <Could not set
the activation state to true for the application 'CoherenceHandsOn2_2'.
weblogic.application.ModuleException: No storage-enabled nodes exist for
service DistributedSessions
at
weblogic.servlet.internal.WebAppModule.activateContexts(WebAppModule.java:1497)
at
weblogic.servlet.internal.WebAppModule.activate(WebAppModule.java:438)
at
weblogic.application.internal.flow.ModuleStateDriver$2.next(ModuleStateDriver.java:375)
at
weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52)
at
weblogic.application.internal.flow.ModuleStateDriver.activate(ModuleStateDriver.java:95)
Truncated. see log file for complete stacktrace
Caused By: com.tangosol.net.RequestPolicyException: No storage-enabled
nodes exist for service DistributedSessions
at
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$
BinaryMap.onMissingStorage(PartitionedCache.CDB:27)
at
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$
BinaryMap.sendStorageRequest(PartitionedCache.CDB:15)
at
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$
BinaryMap.addIndex(PartitionedCache.CDB:11)
at
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$
ViewMap.addIndex(PartitionedCache.CDB:1)
at
com.tangosol.coherence.component.util.SafeNamedCache.addIndex(SafeNamedCache.CDB:1)
Truncated. see log file for complete stacktrace
>
2011-07-25 17:53:40.618/13.343 Oracle Coherence GE 3.7.0.0 <D5>
(thread=Invocation:Management, member=2): Service Management left the cluster
2011-07-25 17:53:40.629/13.354 Oracle Coherence GE 3.7.0.0 <D5>
(thread=DistributedCache:DistributedSessions, member=2):
Service DistributedSessions left the cluster
2011-07-25 17:53:40.635/13.360 Oracle Coherence GE 3.7.0.0 <D5>
(thread=Cluster, member=2): Service Cluster left the cluster
<2011/07/25 17:53:40 JST> <Notice> <Log Management> <BEA-170027>
<The server successfully established a connection with the domain-level
diagnostic service.>
<2011/07/25 17:53:40 JST> <Notice> <WebLogicServer> <BEA-000365>
<Server state changed to ADMIN>
<2011/07/25 17:53:40 JST> <Notice> <WebLogicServer> <BEA-000365>
<Server state changed to RESUMING>
<2011/07/25 17:53:40 JST> <Warning> <Server> <BEA-002611> <The host name
"NSEKIYA-jp.jp.oracle.com" maps to multiple IP addresses:
10.185.225.2, 192.168.99.1, 0:0:0:0:0:0:0:1>
<2011/07/25 17:53:40 JST> <Notice> <Server> <BEA-002613> <Channel "Default"
is now listening on 192.168.99.1:7002 for protocols iiop, t3, ldap, snmp, http.>
<2011/07/25 17:53:40 JST> <Notice> <Server> <BEA-002613> <Channel "Default[2]"
is now listening on 127.0.0.1:7002 for protocols iiop, t3, ldap, snmp, http.>
<2011/07/25 17:53:40 JST> <Notice> <Server> <BEA-002613> <Channel "Default[1]"
is now listening on 10.185.225.2:7002 for protocols iiop, t3, ldap, snmp, http.>
<2011/07/25 17:53:40 JST> <Notice> <Server> <BEA-002613> <Channel "Default[3]"
is now listening on 0:0:0:0:0:0:0:1:7002 for protocols iiop, t3, ldap, snmp, http.>
<2011/07/25 17:53:40 JST> <Notice> <WebLogicServer> <BEA-000332> <Started
the WebLogic managed server "Server-0" in the domain "base_domain" in
development mode>
<2011/07/25 17:53:40 JST> <Notice> <WebLogicServer> <BEA-000365>
<Server state changed to RUNNING>
<2011/07/25 17:53:40 JST> <Notice> <WebLogicServer> <BEA-000360>
<Server started in RUNNING mode>

Not enough PermGen space when starting a Coherence*Web WebLogic server

Problem:
Not enough PermGen space when starting a Coherence*Web WebLogic server.
Solution:
Raise the PermGen size to min 128m / max 256m in $DOMAIN/bin/setDomainEnv.cmd.
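In a stock WLS 10.3 domain script the PermGen flags are set via variables in setDomainEnv.cmd; a sketch of the change (the variable names may differ in your script, so check before editing):

```
@rem $DOMAIN\bin\setDomainEnv.cmd (Sun JDK branch) -- variable names
@rem assumed from a stock 10.3 domain script; adjust to match yours
set MEM_PERM_SIZE=-XX:PermSize=128m
set MEM_MAX_PERM_SIZE=-XX:MaxPermSize=256m
```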

LOG:
2011-07-26 17:43:14.810/18.603 Oracle Coherence GE 3.7.0.0 <Info>
(thread=[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default
(self-tuning)', member=4): Registering MBean using object name
"type=WebLogicHttpSessionManager,nodeId=4,appId=CoherenceHandson4_1CoherenceHandson4_1.war"
<2011/07/26 17:43:14 JST> <Notice> <Log Management> <BEA-170027>
<The server successfully established a connection with the domain-level
diagnostic service.>
<2011/07/26 17:43:15 JST> <Notice> <WebLogicServer> <BEA-000365>
<Server state changed to ADMIN>
<2011/07/26 17:43:16 JST> <Critical> <WebLogicServer> <BEA-000386>
<A server subsystem failed. Reason:
java.lang.OutOfMemoryError: PermGen space
java.lang.OutOfMemoryError: PermGen space
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:630)
at java.lang.ClassLoader.defineClass(ClassLoader.java:614)
at
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
Truncated. see log file for complete stacktrace

Coherence*Web session-cache-config.xml differences between the jars it ships in.

Diff:
The xml in webInstaller.jar has the <serializer> tag added.
Log:
**************** session-cache-config.xml in coherence-web-spi.war *****************
<?xml version="1.0"?>
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--                                                                       -->
<!-- Cache configuration descriptor for Coherence*Web                      -->
<!--                                                                       -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
<caching-scheme-mapping>
<!--
The clustered cache used to store Session management data.
-->
<cache-mapping>
<cache-name>session-management</cache-name>
<scheme-name>replicated</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store ServletContext attributes.
-->
<cache-mapping>
<cache-name>servletcontext-storage</cache-name>
<scheme-name>replicated</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store Session attributes.
-->
<cache-mapping>
<cache-name>session-storage</cache-name>
<scheme-name>session-near</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store the "overflowing" (split-out due to size)
Session attributes. Only used for the "Split" model.
-->
<cache-mapping>
<cache-name>session-overflow</cache-name>
<scheme-name>session-distributed</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store IDs of "recently departed" Sessions.
-->
<cache-mapping>
<cache-name>session-death-certificates</cache-name>
<scheme-name>session-certificate</scheme-name>
</cache-mapping>
<!--
The local cache used to store Sessions that are not yet distributed (if
there is a distribution controller).
-->
<cache-mapping>
<cache-name>local-session-storage</cache-name>
<scheme-name>unlimited-local</scheme-name>
</cache-mapping>
<!--
The local cache used to store Session attributes that are not distributed
(if there is a distribution controller or attributes are allowed to become
local when serialization fails).
-->
<cache-mapping>
<cache-name>local-attribute-storage</cache-name>
<scheme-name>unlimited-local</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<!--
Replicated caching scheme used by the Session management and ServletContext
attribute caches.
-->
<replicated-scheme>
<scheme-name>replicated</scheme-name>
<service-name>ReplicatedSessionsMisc</service-name>
<request-timeout>30s</request-timeout>
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-local</scheme-ref>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</replicated-scheme>
<!--
Near caching scheme used by the Session attribute cache. The front cache
uses a Local caching scheme and the back cache uses a Distributed caching
scheme.
-->
<near-scheme>
<scheme-name>session-near</scheme-name>
<front-scheme>
<local-scheme>
<scheme-ref>session-front</scheme-ref>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>session-distributed</scheme-ref>
</distributed-scheme>
</back-scheme>
<invalidation-strategy>present</invalidation-strategy>
</near-scheme>
<local-scheme>
<scheme-name>session-front</scheme-name>
<eviction-policy>HYBRID</eviction-policy>
<high-units>1000</high-units>
<low-units>750</low-units>
</local-scheme>
<distributed-scheme>
<scheme-name>session-distributed</scheme-name>
<scheme-ref>session-base</scheme-ref>
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-local</scheme-ref>
</local-scheme>
<!-- for disk overflow use this backing scheme instead:
<overflow-scheme>
<scheme-ref>session-paging</scheme-ref>
</overflow-scheme>
-->
</backing-map-scheme>
</distributed-scheme>
<!--
Distributed caching scheme used by the "recently departed" Session cache.
-->
<distributed-scheme>
<scheme-name>session-certificate</scheme-name>
<scheme-ref>session-base</scheme-ref>
<backing-map-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>4000</high-units>
<low-units>3000</low-units>
<expiry-delay>86400</expiry-delay>
</local-scheme>
</backing-map-scheme>
</distributed-scheme>
<!--
"Base" Distributed caching scheme that defines common configuration.
-->
<distributed-scheme>
<scheme-name>session-base</scheme-name>
<service-name>DistributedSessions</service-name>
<thread-count>0</thread-count>
<lease-granularity>member</lease-granularity>
<local-storage
system-property="tangosol.coherence.session.localstorage">false</local-storage>
<partition-count>257</partition-count>
<backup-count>1</backup-count>
<backup-storage>
<type>on-heap</type>
</backup-storage>
<request-timeout>30s</request-timeout>
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-local</scheme-ref>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<!--
Disk-based Session attribute overflow caching scheme.
-->
<overflow-scheme>
<scheme-name>session-paging</scheme-name>
<front-scheme>
<local-scheme>
<scheme-ref>session-front</scheme-ref>
</local-scheme>
</front-scheme>
<back-scheme>
<external-scheme>
<bdb-store-manager/>
</external-scheme>
</back-scheme>
</overflow-scheme>
<!--
Local caching scheme definition used by all caches that do not require an
eviction policy.
-->
<local-scheme>
<scheme-name>unlimited-local</scheme-name>
<service-name>LocalSessionCache</service-name>
</local-scheme>
<!--
Clustered invocation service that manages sticky session ownership.
-->
<invocation-scheme>
<service-name>SessionOwnership</service-name>
<request-timeout>30s</request-timeout>
</invocation-scheme>
</caching-schemes>
</cache-config>
*********************************
**************** session-cache-config.xml in web-installer.jar/web-install/ *****************
<?xml version="1.0"?>
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--                                                                       -->
<!-- Cache configuration descriptor for Coherence*Web                      -->
<!--                                                                       -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
              xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
<caching-scheme-mapping>
<!--
The clustered cache used to store Session management data.
-->
<cache-mapping>
<cache-name>session-management</cache-name>
<scheme-name>replicated</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store ServletContext attributes.
-->
<cache-mapping>
<cache-name>servletcontext-storage</cache-name>
<scheme-name>replicated</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store Session attributes.
-->
<cache-mapping>
<cache-name>session-storage</cache-name>
<scheme-name>session-near</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store the "overflowing" (split-out due to size)
Session attributes. Only used for the "Split" model.
-->
<cache-mapping>
<cache-name>session-overflow</cache-name>
<scheme-name>session-distributed</scheme-name>
</cache-mapping>
<!--
The clustered cache used to store IDs of "recently departed" Sessions.
-->
<cache-mapping>
<cache-name>session-death-certificates</cache-name>
<scheme-name>session-certificate</scheme-name>
</cache-mapping>
<!--
The local cache used to store Sessions that are not yet distributed (if
there is a distribution controller).
-->
<cache-mapping>
<cache-name>local-session-storage</cache-name>
<scheme-name>unlimited-local</scheme-name>
</cache-mapping>
<!--
The local cache used to store Session attributes that are not distributed
(if there is a distribution controller or attributes are allowed to become
local when serialization fails).
-->
<cache-mapping>
<cache-name>local-attribute-storage</cache-name>
<scheme-name>unlimited-local</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<!--
Replicated caching scheme used by the Session management and ServletContext
attribute caches.
-->
<replicated-scheme>
<scheme-name>replicated</scheme-name>
<service-name>ReplicatedSessionsMisc</service-name>
<serializer>
<instance>
<class-name>com.tangosol.io.DefaultSerializer</class-name>
</instance>
</serializer>
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-local</scheme-ref>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</replicated-scheme>
<!--
Near caching scheme used by the Session attribute cache. The front cache
uses a Local caching scheme and the back cache uses a Distributed caching
scheme.
-->
<near-scheme>
<scheme-name>session-near</scheme-name>
<front-scheme>
<local-scheme>
<scheme-ref>session-front</scheme-ref>
</local-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>session-distributed</scheme-ref>
</distributed-scheme>
</back-scheme>
<invalidation-strategy>present</invalidation-strategy>
</near-scheme>
<local-scheme>
<scheme-name>session-front</scheme-name>
<eviction-policy>HYBRID</eviction-policy>
<high-units>1000</high-units>
<low-units>750</low-units>
</local-scheme>
<distributed-scheme>
<scheme-name>session-distributed</scheme-name>
<scheme-ref>session-base</scheme-ref>
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-local</scheme-ref>
</local-scheme>
<!-- for disk overflow use this backing scheme instead:
<overflow-scheme>
<scheme-ref>session-paging</scheme-ref>
</overflow-scheme>
-->
</backing-map-scheme>
</distributed-scheme>
<!--
Distributed caching scheme used by the "recently departed" Session cache.
-->
<distributed-scheme>
<scheme-name>session-certificate</scheme-name>
<scheme-ref>session-base</scheme-ref>
<backing-map-scheme>
<local-scheme>
<eviction-policy>HYBRID</eviction-policy>
<high-units>4000</high-units>
<low-units>3000</low-units>
<expiry-delay>86400</expiry-delay>
</local-scheme>
</backing-map-scheme>
</distributed-scheme>
<!--
"Base" Distributed caching scheme that defines common configuration.
-->
<distributed-scheme>
<scheme-name>session-base</scheme-name>
<service-name>DistributedSessions</service-name>
<serializer>
<instance>
<class-name>com.tangosol.io.DefaultSerializer</class-name>
</instance>
</serializer>
<thread-count>0</thread-count>
<lease-granularity>member</lease-granularity>
<local-storage
system-property="tangosol.coherence.session.localstorage">false</local-storage>
<partition-count>257</partition-count>
<backup-count>1</backup-count>
<backup-storage>
<type>on-heap</type>
</backup-storage>
<backing-map-scheme>
<local-scheme>
<scheme-ref>unlimited-local</scheme-ref>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<!--
Disk-based Session attribute overflow caching scheme.
-->
<overflow-scheme>
<scheme-name>session-paging</scheme-name>
<front-scheme>
<local-scheme>
<scheme-ref>session-front</scheme-ref>
</local-scheme>
</front-scheme>
<back-scheme>
<external-scheme>
<bdb-store-manager/>
</external-scheme>
</back-scheme>
</overflow-scheme>
<!--
Local caching scheme definition used by all caches that do not require an
eviction policy.
-->
<local-scheme>
<scheme-name>unlimited-local</scheme-name>
<service-name>LocalSessionCache</service-name>
</local-scheme>
<!--
Clustered invocation service that manages sticky session ownership.
-->
<invocation-scheme>
<service-name>SessionOwnership</service-name>
<serializer>
<instance>
<class-name>com.tangosol.io.DefaultSerializer</class-name>
</instance>
</serializer>
</invocation-scheme>
</caching-schemes>
</cache-config>
******************************

@Coherence*Web setup: the cache server and the WebLogic Coherence node will not join the same cluster.

Problem:
I launched another cache server. That cache server listened on a different
port but still joined the same cluster as the other cache server, so the
port number does not seem to be the problem.
Solved:
I had mixed Coherence versions: the cache server was 3.7, while WebLogic
used 3.6's coherence.jar. Tested a plain 3.6 cache server against a 3.7
cache server with the same configuration; they didn't form one cluster.
They split.
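Matching the Coherence versions is the real fix, and the logs below make the mechanism visible: each release defaults to its own multicast group (224.3.6.0:36000 for 3.6 vs 224.3.7.0:37000 for 3.7 here), so mixed-version nodes never even hear each other. For same-version nodes that need to be pinned to one group explicitly, the standard overrides look like this (example values only):

```
@rem Example values only; every JVM in the cluster must use the same pair.
set java_opts=%java_opts% ^
    -Dtangosol.coherence.clusteraddress=224.3.7.0 ^
    -Dtangosol.coherence.clusterport=37000
```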

Logs:
cache server: Group{Address=224.3.6.0, Port=36000, TTL=4}
MasterMemberSet
(
ThisMember=Member(Id=1, Timestamp=2011-07-25 12:47:37.437,
Address=192.168.99.1:8090, MachineId=2817, Location=site:jp
.oracle.com,machine:NSEKIYA-jp,process:5568, Role=WeblogicServer)
OldestMember=Member(Id=1, Timestamp=2011-07-25 12:47:37.437,
Address=192.168.99.1:8090, MachineId=2817, Location=site:
jp.oracle.com,machine:NSEKIYA-jp,process:5568, Role=WeblogicServer)
ActualMemberSet=MemberSet(Size=1, BitSetCount=2
Member(Id=1, Timestamp=2011-07-25 12:47:37.437,
Address=192.168.99.1:8090, MachineId=2817, Location=site:jp.oracle.c
om,machine:NSEKIYA-jp,process:5568, Role=WeblogicServer)
)
RecycleMillis=1200000
RecycleSet=MemberSet(Size=0, BitSetCount=0
)
)
*************************
weblogic: Group{Address=224.3.7.0, Port=37000, TTL=4}
MasterMemberSet
(
ThisMember=Member(Id=1, Timestamp=2011-07-25 12:02:13.189,
Address=192.168.99.1:8088, MachineId=2817, Location=site:jp
.oracle.com,machine:NSEKIYA-jp,process:588, Role=CoherenceConsole)
OldestMember=Member(Id=1, Timestamp=2011-07-25 12:02:13.189,
Address=192.168.99.1:8088, MachineId=2817, Location=site:
jp.oracle.com,machine:NSEKIYA-jp,process:588, Role=CoherenceConsole)
ActualMemberSet=MemberSet(Size=1, BitSetCount=2
Member(Id=1, Timestamp=2011-07-25 12:02:13.189,
Address=192.168.99.1:8088, MachineId=2817, Location=site:jp.oracle.c
om,machine:NSEKIYA-jp,process:588, Role=CoherenceConsole)
)
RecycleMillis=1200000
RecycleSet=MemberSet(Size=0, BitSetCount=0
)
)
*************************
<2011/07/25 12:47:41 JST> <Error> <Deployer> <BEA-149231> <Could not set
the activation state to true for the application 'CoherenceHandsOn2'.
weblogic.application.ModuleException: No storage-enabled nodes exist for
service DistributedSessions
at
weblogic.servlet.internal.WebAppModule.activateContexts(WebAppModule.java:1497)
at
weblogic.servlet.internal.WebAppModule.activate(WebAppModule.java:438)
at
weblogic.application.internal.flow.ModuleStateDriver$2.next(ModuleStateDriver.java:375)
at
weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:52)
at
weblogic.application.internal.flow.ModuleStateDriver.activate(ModuleStateDriver.java:95)
Truncated. see log file for complete stacktrace
Caused By: com.tangosol.net.RequestPolicyException: No storage-enabled
nodes exist for service DistributedSessions
at
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$
BinaryMap.onMissingStorage(PartitionedCache.CDB:23)
at
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$
BinaryMap.sendStorageRequest(PartitionedCache.CDB:15)
at
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$
BinaryMap.addIndex(PartitionedCache.CDB:11)
at
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$
ViewMap.addIndex(PartitionedCache.CDB:1)
at
com.tangosol.coherence.component.util.SafeNamedCache.addIndex(SafeNamedCache.CDB:1)
Truncated. see log file for complete stacktrace
>
2011-07-25 12:47:41.632/15.943 Oracle Coherence GE 3.6.0.4 <D5>
(thread=Invocation:Management, member=1): Service Management left the cluster

Error when starting a managed server using Coherence*Web

log:
2011-07-25 12:01:39.819/13.122 Oracle Coherence GE 3.6.0.4 <Error>
(thread=[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default
(self-tuning)', member=n/a): Error while starting cluster:
java.lang.RuntimeException: Failed to start Service "Cluster"
(ServiceState=SERVICE_STOPPED, STATE_JOINING)
at
com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:38)
at
com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:6)
at
com.tangosol.coherence.component.net.Cluster.onStart(Cluster.CDB:637)
at
com.tangosol.coherence.component.net.Cluster.start(Cluster.CDB:11)
at
com.tangosol.coherence.component.util.SafeCluster.startCluster(SafeCluster.CDB:3)
at
com.tangosol.coherence.component.util.SafeCluster.restartCluster(SafeCluster.CDB:7)
at
com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:26)
at
com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
at
com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:998)
Solve:
The Coherence configuration was mismatched between the Coherence cache
server and the WebLogic application's Coherence.
See
http://download.oracle.com/docs/cd/E18686_01/coh.37/e18690/cweb_wls.htm#CHDEEJCE
A coherence.cmd sample that works:
@echo off
@
@rem This will start a console application
@rem demonstrating the functionality of the Coherence(tm) API
@
setlocal
:config
@rem specify the Coherence installation directory
set coherence_home=%~dp0\..
@rem specify if the console will also act as a server
set storage_enabled=false
@rem specify the JVM heap size
set memory=128m

:start
if not exist "%coherence_home%\lib\coherence.jar" goto instructions
if "%java_home%"=="" (set java_exec=java) else (set java_exec=%java_home%\bin\java)

:launch
if "%1"=="-jmx" (
    set jmxproperties=-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true
    shift
)
@rem java_opts for Coherence*Web reference page:
@rem http://download.oracle.com/docs/cd/E18686_01/coh.37/e18690/cweb_wls.htm#CHDEEJCE
set java_opts=-Xms%memory% -Xmx%memory% ^
    -cp %coherence_home%/lib/coherence.jar;%coherence_home%/lib/coherence-web-spi.war ^
    -Dtangosol.coherence.management.remote=true ^
    -Dtangosol.coherence.cacheconfig=WEB-INF/classes/session-cache-config.xml ^
    -Dtangosol.coherence.distributed.localstorage=true %jmxproperties%

%java_exec% -server -showversion %java_opts% com.tangosol.net.CacheFactory %1
goto exit
:instructions
echo Usage:
echo ^<coherence_home^>\bin\coherence.cmd
goto exit
:exit
endlocal
@echo on

Glassfish will not use Coherence for HttpSession.

Problem
Glassfish will not use Coherence for HttpSession.
@Glassfish 3.1, Coherence 3.7
Solution
session-cache-config.xml must be wrong;
need to use the one included in webInstaller.jar.

-> Nope, this wasn't the problem.

My guess is that there is a problem with the GlassFish SPI,

but there is no detailed doc about the GlassFish SPI.

Pending state

Monday, July 25, 2011