Discussion: problem in configuring hbase with hdfs
Puri, Aseem
2009-03-17 07:32:31 UTC
Hi,

I am a newbie working on Hadoop and HBase. I am using Hadoop-0.18.0 and
HBase-0.18.1, and there is a problem with the HBase master when HBase
uses HDFS. My hadoop-site.xml configuration is:



<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>



And my hbase-site.xml configuration is:



<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
    <description>The directory shared by region servers.</description>
  </property>
  <property>
    <name>hbase.master</name>
    <value>localhost:60000</value>
    <description>The host and port that the HBase master runs at.</description>
  </property>
  <property>
    <name>hbase.regionserver</name>
    <value>localhost:60020</value>
    <description>The host and port a HBase region server runs at.</description>
  </property>
</configuration>
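One thing worth checking with a pair of configs like this: hbase.rootdir must match fs.default.name exactly (same scheme, host, and port), or the master cannot reach its root directory. The following is only a sketch of such a check; the conf file paths (HADOOP_SITE, HBASE_SITE) are assumptions to be adjusted to the actual install:

```shell
# Compare the HDFS authority in fs.default.name against hbase.rootdir.
# HADOOP_SITE and HBASE_SITE are assumed paths -- point them at your conf dirs.
HADOOP_SITE=${HADOOP_SITE:-conf/hadoop-site.xml}
HBASE_SITE=${HBASE_SITE:-conf/hbase-site.xml}

# Pull the hdfs:// value out of each file (first match only).
fs_default=$(sed -n 's|.*<value>\(hdfs://[^<]*\)</value>.*|\1|p' "$HADOOP_SITE" 2>/dev/null | head -1)
hbase_root=$(sed -n 's|.*<value>\(hdfs://[^<]*\)/hbase</value>.*|\1|p' "$HBASE_SITE" 2>/dev/null | head -1)

if [ -z "$fs_default" ] || [ -z "$hbase_root" ]; then
  echo "Could not read one of the config files; check the paths above"
elif [ "$fs_default" = "$hbase_root" ]; then
  echo "OK: both point at $fs_default"
else
  echo "Mismatch: fs.default.name=$fs_default vs hbase.rootdir=$hbase_root"
fi
```

With the two files shown in this message, both values resolve to hdfs://localhost:9000, so this particular pair is consistent.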



When I run the command $ bin/hadoop dfs -ls /, I get the following result:



$ bin/hadoop dfs -ls /

Found 2 items

drwxr-xr-x - HadoopAdmin supergroup 0 2009-03-17 12:22 /hbase

drwxr-xr-x - HadoopAdmin supergroup 0 2009-03-17 12:21 /tmp



But when I run the list command in $ bin/hbase shell, I get the following
exception:



hbase(main):001:0> list
09/03/17 12:24:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
09/03/17 12:24:26 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 1 time(s).
09/03/17 12:24:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:29 INFO client.HConnectionManager$TableServers: Attempt 0 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 2000
09/03/17 12:24:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
09/03/17 12:24:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 1 time(s).
09/03/17 12:24:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:38 INFO client.HConnectionManager$TableServers: Attempt 1 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 2000
09/03/17 12:24:42 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
09/03/17 12:24:44 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 1 time(s).
09/03/17 12:24:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:48 INFO client.HConnectionManager$TableServers: Attempt 2 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 2000
09/03/17 12:24:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
09/03/17 12:24:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 1 time(s).
09/03/17 12:24:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:57 INFO client.HConnectionManager$TableServers: Attempt 3 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 4000
09/03/17 12:25:03 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 0 time(s).
09/03/17 12:25:05 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 1 time(s).
09/03/17 12:25:07 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:60000. Already tried 2 time(s).
NativeException: org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000
        from org/apache/hadoop/hbase/client/HConnectionManager.java:221:in `getMaster'
        from org/apache/hadoop/hbase/client/HBaseAdmin.java:67:in `<init>'
        from sun/reflect/NativeConstructorAccessorImpl.java:-2:in `newInstance0'
        from sun/reflect/NativeConstructorAccessorImpl.java:39:in `newInstance'
        from sun/reflect/DelegatingConstructorAccessorImpl.java:27:in `newInstance'
        from java/lang/reflect/Constructor.java:513:in `newInstance'
        from org/jruby/javasupport/JavaConstructor.java:195:in `new_instance'
        from org.jruby.javasupport.JavaConstructorInvoker$new_instance_method_0_0:-1:in `call'
        from org/jruby/runtime/CallSite.java:261:in `call'
        from org/jruby/evaluator/ASTInterpreter.java:670:in `callNode'
        from org/jruby/evaluator/ASTInterpreter.java:324:in `evalInternal'
        from org/jruby/evaluator/ASTInterpreter.java:2173:in `setupArgs'
        from org/jruby/evaluator/ASTInterpreter.java:571:in `attrAssignNode'
        from org/jruby/evaluator/ASTInterpreter.java:309:in `evalInternal'
        from org/jruby/evaluator/ASTInterpreter.java:620:in `blockNode'
        from org/jruby/evaluator/ASTInterpreter.java:318:in `evalInternal'
        ... 178 levels...
        from ruby/C_3a_/Documents_20_and_20_Settings/HadoopAdmin/hbase/bin/C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:-1:in `__file__'
        from ruby/C_3a_/Documents_20_and_20_Settings/HadoopAdmin/hbase/bin/C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:-1:in `load'
        from org/jruby/Ruby.java:512:in `runScript'
        from org/jruby/Ruby.java:432:in `runNormally'
        from org/jruby/Ruby.java:312:in `runFromMain'
        from org/jruby/Main.java:144:in `run'
        from org/jruby/Main.java:89:in `run'
        from org/jruby/Main.java:80:in `main'
        from file:/C:/Documents and Settings/HadoopAdmin/hbase/lib/jruby-complete-1.1.2.jar!/builtin/javasupport/proxy/concrete.rb:23:in `initialize'
        from file:/C:/Documents and Settings/HadoopAdmin/hbase/lib/jruby-complete-1.1.2.jar!/builtin/javasupport/proxy/concrete.rb:6:in `new'
        from file:/C:/Documents and Settings/HadoopAdmin/hbase/lib/jruby-complete-1.1.2.jar!/builtin/javasupport/proxy/concrete.rb:6:in `new'
        from C:/DOCUME~1/HADOOP~1/hbase/bin/HBase.rb:37:in `initialize'
        from C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:218:in `new'
        from C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:218:in `admin'
        from C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:242:in `list'
        from (hbase):2:in `binding'
hbase(main):002:0>





Also, when I try to stop HBase with the command $ bin/stop-hbase.sh, I get
the message:

no master to stop


When I change my hbase-site.xml configuration to:



<configuration>
  <property>
    <name>hbase.master</name>
    <value>localhost:60000</value>
    <description>The host and port that the HBase master runs at.</description>
  </property>
  <property>
    <name>hbase.regionserver</name>
    <value>localhost:60020</value>
    <description>The host and port a HBase region server runs at.</description>
  </property>
</configuration>



My master starts working, because HBase is now using the local file system.

But when I change hbase-site.xml back to using HDFS, my master does not
start. Please tell me how to configure things so that my HBase master
starts and HBase uses HDFS. I hope you can help me with this.



-Aseem
Jean-Daniel Cryans
2009-03-17 11:56:05 UTC
Aseem,

It tells you that there is no master to stop, which means something went
wrong while the master was starting up and it shut itself down. Can you
look in your master log and see if any exceptions were thrown?

Thx,

J-D
Puri, Aseem
2009-03-18 04:24:27 UTC
Hi,

There are exceptions while starting the HBase master:

Wed Mar 18 09:10:51 IST 2009 Starting master on ie11dtxpficbfise
java version "1.6.0_11"
Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
Java HotSpot(TM) Client VM (build 11.0-b16, mixed mode, sharing)
ulimit -n 256
2009-03-18 09:10:55,650 INFO org.apache.hadoop.hbase.master.HMaster: Root region dir: hdfs://localhost:9000/hbase/-ROOT-/70236052
2009-03-18 09:10:55,806 FATAL org.apache.hadoop.hbase.master.HMaster: Not starting HMaster because:
java.io.EOFException
        at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:323)
        at java.io.DataInputStream.readUTF(DataInputStream.java:572)
        at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:101)
        at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:120)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:203)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:147)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:784)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:818)
2009-03-18 09:10:55,806 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:784)
        at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:818)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:323)
        at java.io.DataInputStream.readUTF(DataInputStream.java:572)
        at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:101)
        at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:120)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:203)
        at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:147)
        ... 6 more

Also, when I start my Hadoop server, the datanode does not start and
throws exceptions as well. The datanode exceptions are:

2009-03-18 09:08:47,354 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = ie11dtxpficbfise/199.63.66.65
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.18.0
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 686010; compiled by 'hadoopqa' on Thu Aug 14 19:48:33 UTC 2008
************************************************************/
2009-03-18 09:08:56,667 ERROR org.apache.hadoop.dfs.DataNode: org.apache.hadoop.dfs.IncorrectVersionException: Unexpected version of storage directory C:\tmp\hadoop-HadoopAdmin\dfs\data. Reported: -18. Expecting = -16.
        at org.apache.hadoop.dfs.Storage.getFields(Storage.java:584)
        at org.apache.hadoop.dfs.DataStorage.getFields(DataStorage.java:171)
        at org.apache.hadoop.dfs.Storage$StorageDirectory.read(Storage.java:164)
        at org.apache.hadoop.dfs.Storage$StorageDirectory.read(Storage.java:153)
        at org.apache.hadoop.dfs.DataStorage.doTransition(DataStorage.java:221)
        at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:141)
        at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:273)
        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:190)
        at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2987)
        at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2942)
        at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2950)
        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3072)

2009-03-18 09:08:56,682 INFO org.apache.hadoop.dfs.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ie11dtxpficbfise/199.63.66.65
************************************************************/

So is there any relation between the HBase master and the Hadoop datanode
exceptions; is that why the master does not start?

Please tell me where the problem is that keeps the HBase master from starting.

Thanks & Regards
Aseem Puri


-----Original Message-----
From: jdcryans-***@public.gmane.org [mailto:jdcryans-***@public.gmane.org] On Behalf Of Jean-Daniel Cryans
Sent: Tuesday, March 17, 2009 5:26 PM
To: hbase-user-7ArZoLwFLBtd/SJB6HiN2Ni2O/***@public.gmane.org
Subject: Re: problem in configuring hbase with hdfs

Aseem,

It tells you that there is no master to stop so it means that
something went wrong when it got started and shut down by itself. Can
you look in your master log and see if there are any exceptions
thrown?

Thx,

J-D
Jean-Daniel Cryans
2009-03-18 12:41:04 UTC
Aseem,

What happened here is that your master was able to create the file in the
namenode's namespace, but it wasn't able to write the file's contents
because no datanodes were alive (it then tried to read the file back and
failed because the file was empty).

That storage version exception means that you probably formatted your
namenode at some point but didn't delete the datanode's data. Delete that
data, reformat your namenode, start DFS, look for any exceptions (they
should be gone, but just in case), then start HBase.
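A sketch of that cleanup sequence, assuming a default Hadoop 0.18 layout. HADOOP_HOME and the datanode storage directory are assumptions to be adjusted to the actual install, and note that formatting the namenode destroys everything stored in HDFS:

```shell
# Cleanup sequence for a datanode storage-version mismatch.
# HADOOP_HOME and DATA_DIR are assumed paths -- adjust to your install.
# WARNING: the namenode -format step destroys all data in HDFS.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}
DATA_DIR=${DATA_DIR:-/tmp/hadoop-$USER/dfs/data}

if [ -x "$HADOOP_HOME/bin/stop-all.sh" ]; then
  "$HADOOP_HOME/bin/stop-all.sh"               # stop HDFS and MapReduce
  rm -rf "$DATA_DIR"                           # remove the stale datanode storage directory
  "$HADOOP_HOME/bin/hadoop" namenode -format   # reformat the namenode
  "$HADOOP_HOME/bin/start-dfs.sh"              # bring DFS back up, then check the logs
else
  echo "Hadoop not found at $HADOOP_HOME; nothing done"
fi
```

After DFS is back up and its logs are clean, start HBase and check the master log again.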

Any reason why you are using 0.18?

J-D
Post by Puri, Aseem
Hi
Wed Mar 18 09:10:51 IST 2009 Starting master on ie11dtxpficbfise
java version "1.6.0_11"
Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
Java HotSpot(TM) Client VM (build 11.0-b16, mixed mode, sharing)
ulimit -n 256
2009-03-18 09:10:55,650 INFO org.apache.hadoop.hbase.master.HMaster: Root region dir: hdfs://localhost:9000/hbase/-ROOT-/70236052
java.io.EOFException
       at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:323)
       at java.io.DataInputStream.readUTF(DataInputStream.java:572)
       at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:101)
       at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:120)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:203)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:147)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
       at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:784)
       at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:818)
2009-03-18 09:10:55,806 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
java.lang.reflect.InvocationTargetException
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
       at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:784)
       at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:818)
Caused by: java.io.EOFException
       at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:323)
       at java.io.DataInputStream.readUTF(DataInputStream.java:572)
       at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:101)
       at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:120)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:203)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:147)
       ... 6 more
Also when I start my start my hadoop server datanode not started it also
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ie11dtxpficbfise/199.63.66.65
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.18.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 686010; compiled by 'hadoopqa' on Thu Aug 14 19:48:33 UTC 2008
************************************************************/
2009-03-18 09:08:56,667 ERROR org.apache.hadoop.dfs.DataNode: org.apache.hadoop.dfs.IncorrectVersionException: Unexpected version of storage directory C:\tmp\hadoop-HadoopAdmin\dfs\data. Reported: -18. Expecting = -16.
       at org.apache.hadoop.dfs.Storage.getFields(Storage.java:584)
       at org.apache.hadoop.dfs.DataStorage.getFields(DataStorage.java:171)
       at org.apache.hadoop.dfs.Storage$StorageDirectory.read(Storage.java:164)
       at org.apache.hadoop.dfs.Storage$StorageDirectory.read(Storage.java:153)
       at org.apache.hadoop.dfs.DataStorage.doTransition(DataStorage.java:221)
       at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:141)
       at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:273)
       at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:190)
       at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2987)
       at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2942)
       at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2950)
       at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3072)
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ie11dtxpficbfise/199.63.66.65
************************************************************/
So is there any relation of Hbase master with hadoop datanode exceptions that's why it do not start?
Please tell where is the problem bcoz of it Hbase master do not start
Thanks & Regards
Aseem Puri
-----Original Message-----
Sent: Tuesday, March 17, 2009 5:26 PM
Subject: Re: problem in configuring hbase with hdfs
Aseem,
It tells you that there is no master to stop so it means that
something went wrong when it got started and shut down by itself. Can
you look in your master log and see if there are any exceptions
thrown?
Thx,
J-D
Post by Puri, Aseem
Hi
I am newbie working on Hadoop - HBase. I am using Hadoop-0.18.0 and
HBase-0.18.1. There is some problem with using HBase master when HBase
<configuration>
 <property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:9000</value>
 </property>
 <property>
   <name>mapred.job.tracker</name>
   <value>localhost:9001</value>
 </property>
 <property>
   <name>dfs.replication</name>
   <value>1</value>
 </property>
</configuration>
<configuration>
 <property>
   <name>hbase.rootdir</name>
   <value>hdfs://localhost:9000/hbase</value>
   <description>The directory shared by region servers.
   </description>
 </property>
 <property>
   <name>hbase.master</name>
   <value>localhost:60000</value>
   <description>The host and port that the HBase master runs at.
   </description>
 </property>
<property>
   <name>hbase.regionserver</name>
   <value>localhost:60020</value>
   <description>The host and port a HBase region server runs at.
   </description>
 </property>
 </configuration>
$ bin/hadoop dfs -ls /
Found 2 items
drwxr-xr-x   - HadoopAdmin supergroup          0 2009-03-17 12:22 /hbase
drwxr-xr-x   - HadoopAdmin supergroup          0 2009-03-17 12:21 /tmp
But when I use HBase command list in $ bin/hbase shell I got following
hbase(main):001:0> list
localhost/127.0.0.1:60000. Already tried 0 time(s).
localhost/127.0.0.1:60000. Already tried 1 time(s).
localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:29 INFO client.HConnectionManager$TableServers: Attempt 0 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 2000
localhost/127.0.0.1:60000. Already tried 0 time(s).
localhost/127.0.0.1:60000. Already tried 1 time(s).
localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:38 INFO client.HConnectionManager$TableServers: Attempt 1 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 2000
localhost/127.0.0.1:60000. Already tried 0 time(s).
localhost/127.0.0.1:60000. Already tried 1 time(s).
localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:48 INFO client.HConnectionManager$TableServers: Attempt 2 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 2000
localhost/127.0.0.1:60000. Already tried 0 time(s).
localhost/127.0.0.1:60000. Already tried 1 time(s).
localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:57 INFO client.HConnectionManager$TableServers: Attempt 3 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 4000
localhost/127.0.0.1:60000. Already tried 0 time(s).
localhost/127.0.0.1:60000. Already tried 1 time(s).
localhost/127.0.0.1:60000. Already tried 2 time(s).
localhost:60000
       from org/apache/hadoop/hbase/client/HConnectionManager.java:221:in `getMaster'
       from org/apache/hadoop/hbase/client/HBaseAdmin.java:67:in `<init>'
       from sun/reflect/NativeConstructorAccessorImpl.java:-2:in `newInstance0'
       from sun/reflect/NativeConstructorAccessorImpl.java:39:in `newInstance'
       from sun/reflect/DelegatingConstructorAccessorImpl.java:27:in `newInstance'
       from java/lang/reflect/Constructor.java:513:in `newInstance'
       from org/jruby/javasupport/JavaConstructor.java:195:in `new_instance'
       from org.jruby.javasupport.JavaConstructorInvoker$new_instance_method_0_0:-1:in `call'
       from org/jruby/runtime/CallSite.java:261:in `call'
       from org/jruby/evaluator/ASTInterpreter.java:670:in `callNode'
       from org/jruby/evaluator/ASTInterpreter.java:324:in `evalInternal'
       from org/jruby/evaluator/ASTInterpreter.java:2173:in `setupArgs'
       from org/jruby/evaluator/ASTInterpreter.java:571:in `attrAssignNode'
       from org/jruby/evaluator/ASTInterpreter.java:309:in `evalInternal'
       from org/jruby/evaluator/ASTInterpreter.java:620:in `blockNode'
       from org/jruby/evaluator/ASTInterpreter.java:318:in `evalInternal'
... 178 levels...
       from ruby/C_3a_/Documents_20_and_20_Settings/HadoopAdmin/hbase/bin/C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:-1:in `__file__'
       from ruby/C_3a_/Documents_20_and_20_Settings/HadoopAdmin/hbase/bin/C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:-1:in `load'
       from org/jruby/Ruby.java:512:in `runScript'
       from org/jruby/Ruby.java:432:in `runNormally'
       from org/jruby/Ruby.java:312:in `runFromMain'
       from org/jruby/Main.java:144:in `run'
       from org/jruby/Main.java:89:in `run'
       from org/jruby/Main.java:80:in `main'
       from file:/C:/Documents and Settings/HadoopAdmin/hbase/lib/jruby-complete-1.1.2.jar!/builtin/javasupport/proxy/concrete.rb:23:in `initialize'
       from file:/C:/Documents and Settings/HadoopAdmin/hbase/lib/jruby-complete-1.1.2.jar!/builtin/javasupport/proxy/concrete.rb:6:in `new'
       from file:/C:/Documents and Settings/HadoopAdmin/hbase/lib/jruby-complete-1.1.2.jar!/builtin/javasupport/proxy/concrete.rb:6:in `new'
       from C:/DOCUME~1/HADOOP~1/hbase/bin/HBase.rb:37:in `initialize'
       from C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:218:in `new'
       from C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:218:in `admin'
       from C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:242:in `list'
       from (hbase):2:in `binding'
hbase(main):002:0>
Also, when I try to stop HBase with the command $ bin/stop-hbase.sh I get
"no master to stop". But with the following hbase-site.xml, which has no hbase.rootdir:
<configuration>
<property>
   <name>hbase.master</name>
   <value>localhost:60000</value>
   <description>The host and port that the HBase master runs at.
   </description>
 </property>
<property>
   <name>hbase.regionserver</name>
   <value>localhost:60020</value>
   <description>The host and port a HBase region server runs at.
   </description>
 </property>
 </configuration>
My master starts working, as HBase is now using the local file system.
But when I want to use HDFS and change the hbase-site.xml configuration, my
master does not start. Please tell me how I should configure things so that my
HBase master starts and HBase uses HDFS. I hope you can help me with
this.
-Aseem
Puri, Aseem
2009-03-18 15:03:21 UTC
Permalink
Thanks Jean, my problem is solved now.

I removed all the old data and restarted my Hadoop server; then my datanode started. After that my HBase master started working as well.

I am using Hadoop/HBase 0.18 because my Eclipse supports the hadoop-0.18.0-eclipse-plugin.
When I switch to Hadoop/HBase 0.19 and use the hadoop-0.19.0-eclipse-plugin, my Eclipse doesn't show the MapReduce perspective. I am using Eclipse Platform (Ganymede), Version 3.4.1.

Can you tell me which version of Eclipse supports Hadoop/HBase 0.19 and can use the hadoop-0.19.0-eclipse-plugin?

Thanks & Regards
Aseem Puri

-----Original Message-----
From: jdcryans-***@public.gmane.org [mailto:jdcryans-***@public.gmane.org] On Behalf Of Jean-Daniel Cryans
Sent: Wednesday, March 18, 2009 6:11 PM
To: hbase-user-7ArZoLwFLBtd/SJB6HiN2Ni2O/***@public.gmane.org
Subject: Re: problem in configuring hbase with hdfs

Aseem,

What happened here is that your master was able to create the
namespace on the namenode but then wasn't able to write the files
because no datanodes are alive (then it tries to read and it failed
because the file was empty).

That storage version exception means that you probably at some point
formatted your namenode but didn't delete the datanode's data. Do
that, start dfs, look for any exceptions (should be cleared but just
in case) then start HBase. You should probably reformat your namenode
first too.
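A sketch of the recovery sequence J-D describes, for a throwaway pseudo-distributed 0.18 setup. The storage path is taken from the IncorrectVersionException in the datanode log (C:\tmp\hadoop-HadoopAdmin\dfs\data, i.e. /tmp/hadoop-HadoopAdmin under Cygwin); everything here deletes HDFS state, so it is only for a dev cluster with no data worth keeping.

```shell
# Stop HBase and the Hadoop daemons first
bin/stop-hbase.sh
bin/stop-all.sh        # Hadoop 0.18: stops DFS and MapReduce daemons

# Delete the stale namenode/datanode storage left over from the old format
# (path comes from the exception in the datanode log; adjust to your setup)
rm -rf /tmp/hadoop-HadoopAdmin/dfs

# Re-format the namenode and bring DFS back up
bin/hadoop namenode -format
bin/start-dfs.sh

# Check the datanode log for exceptions, then start HBase
bin/start-hbase.sh
```

The key point is deleting the datanode's data directory together with reformatting the namenode; formatting only the namenode leaves the datanode with a storage layout the new namespace rejects.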

Any reason why you are using 0.18?

J-D
Post by Puri, Aseem
Hi
Wed Mar 18 09:10:51 IST 2009 Starting master on ie11dtxpficbfise
java version "1.6.0_11"
Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
Java HotSpot(TM) Client VM (build 11.0-b16, mixed mode, sharing)
ulimit -n 256
2009-03-18 09:10:55,650 INFO org.apache.hadoop.hbase.master.HMaster: Root region dir: hdfs://localhost:9000/hbase/-ROOT-/70236052
java.io.EOFException
       at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:323)
       at java.io.DataInputStream.readUTF(DataInputStream.java:572)
       at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:101)
       at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:120)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:203)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:147)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
       at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:784)
       at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:818)
2009-03-18 09:10:55,806 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
java.lang.reflect.InvocationTargetException
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
       at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:784)
       at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:818)
Caused by: java.io.EOFException
       at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:323)
       at java.io.DataInputStream.readUTF(DataInputStream.java:572)
       at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:101)
       at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:120)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:203)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:147)
       ... 6 more
Also, when I start my Hadoop server the datanode does not start; it logs:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ie11dtxpficbfise/199.63.66.65
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.18.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 686010; compiled by 'hadoopqa' on Thu Aug 14 19:48:33 UTC 2008
************************************************************/
2009-03-18 09:08:56,667 ERROR org.apache.hadoop.dfs.DataNode: org.apache.hadoop.dfs.IncorrectVersionException: Unexpected version of storage directory C:\tmp\hadoop-HadoopAdmin\dfs\data. Reported: -18. Expecting = -16.
       at org.apache.hadoop.dfs.Storage.getFields(Storage.java:584)
       at org.apache.hadoop.dfs.DataStorage.getFields(DataStorage.java:171)
       at org.apache.hadoop.dfs.Storage$StorageDirectory.read(Storage.java:164)
       at org.apache.hadoop.dfs.Storage$StorageDirectory.read(Storage.java:153)
       at org.apache.hadoop.dfs.DataStorage.doTransition(DataStorage.java:221)
       at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:141)
       at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:273)
       at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:190)
       at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2987)
       at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2942)
       at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2950)
       at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3072)
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ie11dtxpficbfise/199.63.66.65
************************************************************/
So is there a relation between the HBase master and the Hadoop datanode exceptions; is that why the master does not start?
Please tell me where the problem is; because of it the HBase master does not start.
Thanks & Regards
Aseem Puri
Jean-Daniel Cryans
2009-03-18 15:05:55 UTC
Permalink
Aseem,

I'm glad it solved your problem.

I'm not familiar with this plugin, maybe someone else on this mailing
list is or you should probably write to the Hadoop mailing list.

J-D
Puri, Aseem
2009-03-18 15:11:37 UTC
Permalink
Jean,

Yes, I will put this question on the Hadoop mailing list. Anyway, thank you again for helping me.

Thanks & Regards
Aseem Puri


-----Original Message-----
From: jdcryans-***@public.gmane.org [mailto:jdcryans-***@public.gmane.org] On Behalf Of Jean-Daniel Cryans
Sent: Wednesday, March 18, 2009 8:36 PM
To: hbase-user-7ArZoLwFLBtd/SJB6HiN2Ni2O/***@public.gmane.org
Subject: Re: problem in configuring hbase with hdfs

Aseem,

I'm glad it solved your problem.

I'm not familiar with this plugin, maybe someone else on this mailing
list is or you should probably write to the Hadoop mailing list.

J-D
Post by Puri, Aseem
Thanks Jean, my problem is solved now.
I have removed all old data and then restarted my Hadoop server then my datanode starts. Also after that my HBase master starts working.
I am using Hadoop - HBase 0.18 bcoz my eclipse supports hadoop-0.18.0-eclipse-plugin.
       When I switch to Hadoop-HBase 0.19 and use hadoop-0.19.0-eclipse-plugin then my eclipse doesn't show mapreduce perspective. I am using Eclipse Platform (GANYMEDE), Version: 3.4.1.
Can you tell which version of eclipse supports Hadoop - HBase 0.19 and can use hadoop-0.19.0-eclipse-plugin?
Thanks & Regards
Aseem Puri
-----Original Message-----
Sent: Wednesday, March 18, 2009 6:11 PM
Subject: Re: problem in configuring hbase with hdfs
Aseem,
What happened here is that your master was able to create the
namespace on the namenode but then wasn't able to write the files
because no datanodes are alive (then it tries to read and it failed
because the file was empty).
That storage version exception means that you probably at some point
formatted your namenode but didn't delete the datanode's data. Do
that, start dfs, look for any exceptions (should be cleared but just
in case) then start HBase. You should probably reformat your namenode
first too.
Any reason why you are using 0.18?
J-D
Post by Puri, Aseem
Hi
Wed Mar 18 09:10:51 IST 2009 Starting master on ie11dtxpficbfise
java version "1.6.0_11"
Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
Java HotSpot(TM) Client VM (build 11.0-b16, mixed mode, sharing)
ulimit -n 256
2009-03-18 09:10:55,650 INFO org.apache.hadoop.hbase.master.HMaster: Root region dir: hdfs://localhost:9000/hbase/-ROOT-/70236052
java.io.EOFException
       at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:323)
       at java.io.DataInputStream.readUTF(DataInputStream.java:572)
       at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:101)
       at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:120)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:203)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:147)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
       at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:784)
       at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:818)
2009-03-18 09:10:55,806 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
java.lang.reflect.InvocationTargetException
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
       at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:784)
       at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:818)
Caused by: java.io.EOFException
       at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:323)
       at java.io.DataInputStream.readUTF(DataInputStream.java:572)
       at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:101)
       at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:120)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:203)
       at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:147)
       ... 6 more
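The root cause of the EOFException above is that readUTF was called on an empty hbase.version file: the master created the file, but with no datanodes alive it could never write the version string into it. A minimal sketch (not HBase code; the temp file stands in for /hbase/hbase.version) reproducing the failure mode:

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class EmptyVersionFileDemo {
    public static void main(String[] args) throws IOException {
        // An empty file stands in for the hbase.version file the
        // master created but never managed to fill with data.
        File f = File.createTempFile("hbase-version", null);
        f.deleteOnExit();
        try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
            in.readUTF(); // the same call FSUtils.getVersion makes
            System.out.println("read ok");
        } catch (EOFException e) {
            // readUTF first reads a 2-byte length; an empty stream
            // cannot supply it, so EOFException is thrown.
            System.out.println("EOFException: empty version file");
        }
    }
}
```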
Also, when I start my Hadoop server the datanode does not start; it logs:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ie11dtxpficbfise/199.63.66.65
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.18.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18 -r 686010; compiled by 'hadoopqa' on Thu Aug 14 19:48:33 UTC 2008
************************************************************/
2009-03-18 09:08:56,667 ERROR org.apache.hadoop.dfs.DataNode: org.apache.hadoop.dfs.IncorrectVersionException: Unexpected version of storage directory C:\tmp\hadoop-HadoopAdmin\dfs\data. Reported: -18. Expecting = -16.
       at org.apache.hadoop.dfs.Storage.getFields(Storage.java:584)
       at org.apache.hadoop.dfs.DataStorage.getFields(DataStorage.java:171)
       at org.apache.hadoop.dfs.Storage$StorageDirectory.read(Storage.java:164)
       at org.apache.hadoop.dfs.Storage$StorageDirectory.read(Storage.java:153)
       at org.apache.hadoop.dfs.DataStorage.doTransition(DataStorage.java:221)
       at org.apache.hadoop.dfs.DataStorage.recoverTransitionRead(DataStorage.java:141)
       at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:273)
       at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:190)
       at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:2987)
       at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2942)
       at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2950)
       at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3072)
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ie11dtxpficbfise/199.63.66.65
************************************************************/
So is there any relation between the datanode exceptions and the HBase master — is that why it does not start?
Please tell me where the problem is, because the HBase master does not start as a result.
Thanks & Regards
Aseem Puri
-----Original Message-----
Sent: Tuesday, March 17, 2009 5:26 PM
Subject: Re: problem in configuring hbase with hdfs
Aseem,
It tells you that there is no master to stop so it means that
something went wrong when it got started and shut down by itself. Can
you look in your master log and see if there are any exceptions
thrown?
Thx,
J-D
Post by Puri, Aseem
Hi
I am newbie working on Hadoop - HBase. I am using Hadoop-0.18.0 and
HBase-0.18.1. There is some problem with using the HBase master when HBase uses HDFS. My hadoop-site.xml configuration is:
<configuration>
 <property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:9000</value>
 </property>
 <property>
   <name>mapred.job.tracker</name>
   <value>localhost:9001</value>
 </property>
 <property>
   <name>dfs.replication</name>
   <value>1</value>
 </property>
</configuration>
And my hbase-site.xml configuration is:
<configuration>
 <property>
   <name>hbase.rootdir</name>
   <value>hdfs://localhost:9000/hbase</value>
   <description>The directory shared by region servers.
   </description>
 </property>
 <property>
   <name>hbase.master</name>
   <value>localhost:60000</value>
   <description>The host and port that the HBase master runs at.
   </description>
 </property>
<property>
   <name>hbase.regionserver</name>
   <value>localhost:60020</value>
   <description>The host and port a HBase region server runs at.
   </description>
 </property>
 </configuration>
$ bin/hadoop dfs -ls /
Found 2 items
drwxr-xr-x   - HadoopAdmin supergroup          0 2009-03-17 12:22 /hbase
drwxr-xr-x   - HadoopAdmin supergroup          0 2009-03-17 12:21 /tmp
But when I use the HBase command list in $ bin/hbase shell I get the following:
hbase(main):001:0> list
localhost/127.0.0.1:60000. Already tried 0 time(s).
localhost/127.0.0.1:60000. Already tried 1 time(s).
localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:29 INFO client.HConnectionManager$TableServers: Attempt 0 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 2000
localhost/127.0.0.1:60000. Already tried 0 time(s).
localhost/127.0.0.1:60000. Already tried 1 time(s).
localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:38 INFO client.HConnectionManager$TableServers: Attempt 1 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 2000
localhost/127.0.0.1:60000. Already tried 0 time(s).
localhost/127.0.0.1:60000. Already tried 1 time(s).
localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:48 INFO client.HConnectionManager$TableServers: Attempt 2 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 2000
localhost/127.0.0.1:60000. Already tried 0 time(s).
localhost/127.0.0.1:60000. Already tried 1 time(s).
localhost/127.0.0.1:60000. Already tried 2 time(s).
09/03/17 12:24:57 INFO client.HConnectionManager$TableServers: Attempt 3 of 5 failed with <java.io.IOException: Call failed on local exception>. Retrying after sleep of 4000
localhost/127.0.0.1:60000. Already tried 0 time(s).
localhost/127.0.0.1:60000. Already tried 1 time(s).
localhost/127.0.0.1:60000. Already tried 2 time(s).
localhost:60000
       from org/apache/hadoop/hbase/client/HConnectionManager.java:221:in `getMaster'
       from org/apache/hadoop/hbase/client/HBaseAdmin.java:67:in `<init>'
       from sun/reflect/NativeConstructorAccessorImpl.java:-2:in `newInstance0'
       from sun/reflect/NativeConstructorAccessorImpl.java:39:in `newInstance'
       from sun/reflect/DelegatingConstructorAccessorImpl.java:27:in `newInstance'
       from java/lang/reflect/Constructor.java:513:in `newInstance'
       from org/jruby/javasupport/JavaConstructor.java:195:in `new_instance'
       from org.jruby.javasupport.JavaConstructorInvoker$new_instance_method_0_0:-1:in `call'
       from org/jruby/runtime/CallSite.java:261:in `call'
       from org/jruby/evaluator/ASTInterpreter.java:670:in `callNode'
       from org/jruby/evaluator/ASTInterpreter.java:324:in `evalInternal'
       from org/jruby/evaluator/ASTInterpreter.java:2173:in `setupArgs'
       from org/jruby/evaluator/ASTInterpreter.java:571:in `attrAssignNode'
       from org/jruby/evaluator/ASTInterpreter.java:309:in `evalInternal'
       from org/jruby/evaluator/ASTInterpreter.java:620:in `blockNode'
       from org/jruby/evaluator/ASTInterpreter.java:318:in `evalInternal'
... 178 levels...
       from ruby/C_3a_/Documents_20_and_20_Settings/HadoopAdmin/hbase/bin/C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:-1:in `__file__'
       from ruby/C_3a_/Documents_20_and_20_Settings/HadoopAdmin/hbase/bin/C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:-1:in `load'
       from org/jruby/Ruby.java:512:in `runScript'
       from org/jruby/Ruby.java:432:in `runNormally'
       from org/jruby/Ruby.java:312:in `runFromMain'
       from org/jruby/Main.java:144:in `run'
       from org/jruby/Main.java:89:in `run'
       from org/jruby/Main.java:80:in `main'
       from file:/C:/Documents and Settings/HadoopAdmin/hbase/lib/jruby-complete-1.1.2.jar!/builtin/javasupport/proxy/concrete.rb:23:in `initialize'
       from file:/C:/Documents and Settings/HadoopAdmin/hbase/lib/jruby-complete-1.1.2.jar!/builtin/javasupport/proxy/concrete.rb:6:in `new'
       from file:/C:/Documents and Settings/HadoopAdmin/hbase/lib/jruby-complete-1.1.2.jar!/builtin/javasupport/proxy/concrete.rb:6:in `new'
       from C:/DOCUME~1/HADOOP~1/hbase/bin/HBase.rb:37:in `initialize'
       from C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:218:in `new'
       from C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:218:in `admin'
       from C:\DOCUME~1\HADOOP~1\hbase\/bin/hirb.rb:242:in `list'
       from (hbase):2:in `binding'
hbase(main):002:0>
Also, when I try to stop HBase with the command $ bin/stop-hbase.sh, I get:
no master to stop
If I instead use the following hbase-site.xml configuration (without hbase.rootdir):
<configuration>
<property>
   <name>hbase.master</name>
   <value>localhost:60000</value>
   <description>The host and port that the HBase master runs at.
   </description>
 </property>
<property>
   <name>hbase.regionserver</name>
   <value>localhost:60020</value>
   <description>The host and port a HBase region server runs at.
   </description>
 </property>
 </configuration>
then my master starts working, because HBase is now using the local file system.
But when I want to use HDFS and change the hbase-site configuration back, my
master does not start. Please tell me how I should configure things so that my
HBase master starts and HBase uses HDFS. I hope you can help me with
this.
-Aseem
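A note for readers hitting the same symptom: the master starts with the second configuration because, when hbase.rootdir is absent, HBase defaults to the local file system. To put HBase on HDFS, hbase.rootdir must name the same host and port as fs.default.name in hadoop-site.xml, as in this fragment (values taken from the first message in the thread):

```xml
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
  <description>Must use the same host and port as fs.default.name.
  </description>
</property>
```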