Discussion:
hbase standalone cannot start master, cannot assign requested address at port 60000
Michael Scott
2010-09-14 05:16:44 UTC
Permalink
Hi,

I am trying to install a standalone hbase server on Fedora Core 11. I have
hadoop running:

bash-4.0$ jps
30908 JobTracker
30631 NameNode
30824 SecondaryNameNode
30731 DataNode
30987 TaskTracker
31137 Jps

The only edit I have made to the hbase-0.20.6 directory from the tarball is
to point to the Java installation (the same as used by hadoop):
export JAVA_HOME=/usr/lib/jvm/java-1.6.0-sun/

I have verified sshd passwordless login for hadoop for all variations of the
hostname (localhost, qualifiedname.com, www.qualifiedname.com, straight IP
address), and have added the qualified hostnames to /etc/hosts just to be
sure.

When I attempt to start the hbase server with start-hbase.sh (as hadoop) the
following appears in the log file:

2010-09-14 00:36:45,555 INFO org.apache.hadoop.hbase.master.HMaster: My
address is qualifiedname.com:60000
2010-09-14 00:36:45,682 ERROR org.apache.hadoop.hbase.master.HMaster: Can
not start master
java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot
assign requested address
at
org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:179)
at
org.apache.hadoop.hbase.ipc.HBaseServer$Listener.<init>(HBaseServer.java:242)
at
org.apache.hadoop.hbase.ipc.HBaseServer.<init>(HBaseServer.java:998)
at
org.apache.hadoop.hbase.ipc.HBaseRPC$Server.<init>(HBaseRPC.java:637)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getServer(HBaseRPC.java:596)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:224)
at
org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
at
org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at
org.apache.hadoop.hbase.ipc.HBaseServer.bind(HBaseServer.java:177)
... 9 more

At this point zookeeper is apparently running, but hbase master is not:
bash-4.0$ jps
31454 HQuorumPeer
30908 JobTracker
30631 NameNode
30824 SecondaryNameNode
30731 DataNode
31670 Jps
30987 TaskTracker

I am stumped -- the documentation simply says that the standalone server
should work out of the box, and hadoop itself seems to be running fine. Does
anyone have any suggestions here? Thanks in advance!

Michael

Ryan Rawson
2010-09-14 05:22:31 UTC
Permalink
you can use:

netstat -anp

to figure out which process is using port 60000.
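
For example, a filter like this should show anything already listening on
that port (a sketch, assuming a Linux netstat that supports -p):

netstat -anp | grep ':60000'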

-ryan
Michael Scott
2010-09-14 05:36:52 UTC
Permalink
I wish it were so, but no port 600XX is in use:

[root]# netstat -anp | grep 600
unix 3 [ ] STREAM CONNECTED 8600 1480/avahi-daemon:


thanks,
Michael
Ryan Rawson
2010-09-14 05:41:40 UTC
Permalink
Duh, my mistake. Look at this line:

java.net.BindException: Problem binding to /97.86.88.18:60000 : Cannot assign requested address

do you have an interface for that IP?

we use the hostname to find the IP and then bind to that IP.
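
For example, a quick check (assuming standard Linux tools) would be:

ifconfig -a        # or: ip addr show

which should list every address actually configured on the box; a bind
should only succeed for one of those addresses (or the wildcard).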

-ryan
Michael Scott
2010-09-14 05:50:59 UTC
Permalink
The IP is a static address through Comcast, and we point gslbiotech.com at
it as well (HTTP works with either the hostname or the IP, so I think the IP
interface is live). I don't know whether that leading / means anything. Note
that hadoop binds just fine to the 500XX ports on that IP.

Michael
Ryan Rawson
2010-09-14 06:11:33 UTC
Permalink
I wouldn't expose either hadoop or hbase to the outside world! It's
pretty trivial to OOM a server by throwing data at the port; hardening the
port just hasn't been a priority yet.

But the log message suggests either a port issue or an IP issue...
perhaps you can dig a little more and let us know what you find?
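
For instance, comparing what the hostname resolves to against the locally
configured addresses might show a mismatch (standard tools assumed):

getent hosts `hostname -f`
ip -4 addr show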

-ryan
Michael Scott
2010-09-14 16:33:41 UTC
Permalink
Thanks again. Don't worry, we're not exposing it to the outside world; I was
just clarifying that the IP address exists and accepts connections, both
internal and external, on other ports. I will see if I can figure out why
it is choking on port 60000. I'm not much of an expert on this; I know
to look in netstat, but that's about it. I'll report back with what I find.

I have tried editing the port number in hbase-site.xml, and it won't bind to
any other port either. The error message for 60000 and for other unused
non-privileged ports is the same as the error I get if I try to bind to
a port that I know is taken, like 50010. If I try a low-numbered privileged
port then I get "Permission denied" instead. I don't see why hadoop binds
to a port but hbase does not (I even tried starting hbase with hadoop off
and binding to 50010, which hadoop uses).
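
For reference, the port change I tried in hbase-site.xml looked roughly like
this (60100 is just an example value, and I'm not completely sure
hbase.master.port is the exact property name in 0.20.6):

<property>
  <name>hbase.master.port</name>
  <value>60100</value>
</property>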

Michael
Stack
2010-09-14 16:41:06 UTC
Permalink
Post by Michael Scott
I don't see why hadoop binds
to a port but hbase does not (I even tried starting hbase with hadoop off
and binding to 50010, which hadoop uses).
Using 50010 worked for hadoop but not for hbase? (Odd. We essentially use
hadoop's mechanism.)

St.Ack
Michael Scott
2010-09-14 17:17:32 UTC
Permalink
That's correct. I tried a number of different ports to see if there was
something weird, and then I shut down the hadoop server and tried to bind hbase
to 50010 (which of course should have been free at that point) but got the
same "Cannot assign requested address" error. If I start hadoop, netstat
shows a process listening on 50010.

I am going to try this on a different OS; I am wondering if FC11 is my
problem.

Michael
Todd Lipcon
2010-09-14 17:23:57 UTC
Permalink
Hi Michael,

It might be related to IPV6. Do you have IPV6 enabled on this machine?

Check out this hadoop JIRA that might be related for some tips:
https://issues.apache.org/jira/browse/HADOOP-6056
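
If IPv6 turns out to be the culprit, the usual workaround is to force the JVM
onto the IPv4 stack, e.g. something like this in conf/hbase-env.sh (a sketch;
merge it with whatever HBASE_OPTS you already set):

export HBASE_OPTS="$HBASE_OPTS -Djava.net.preferIPv4Stack=true"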

-Todd
--
Todd Lipcon
Software Engineer, Cloudera
Michael Scott
2010-09-15 05:42:11 UTC
Permalink
Hi again,

IPV6 was enabled. I shut it off, rebooted to be sure, verified it was still
off, and encountered the same problem once again.

I also tried to open port 60000 by hand with a small php file. I can do
this (as any user) for localhost. I can NOT do this (not even as root) for
the IP address which matches the fully qualified domain name, which is what
hbase is trying to use. Is there some way for me to configure hbase to use
localhost instead of the fully qualified domain name for the master? I
would have thought this was done by default, or that there would be an
obvious line in some conf file, but I can't find it.

Thanks again,

Michael
Michael Scott
2010-09-15 17:18:28 UTC
Permalink
Hi again,

I think the hbase server master is not starting because it is attempting to
open port 60000 on its public IP address, rather than using localhost. I
cannot seem to figure out how to force it (well, configure it) to attempt to
bind to localhost:60000 instead. As far as I can see, this is set in the
file:

org/apache/hadoop/hbase/master/HMaster.java

I don't know much about java, so I'd prefer not to edit the source if there
is an option, but I will if necessary. Can someone please point me to the
way to change this setting? Any help would be greatly appreciated.

Thanks,
Michael
Ryan Rawson
2010-09-15 23:04:47 UTC
Permalink
Hey,

If you bind to localhost you won't actually be reachable by anyone!

The question is: why is your OS disallowing binds to that specific
interface/port combo?

HBase does not really run in a blended/multihomed environment...
meaning if you have multiple interfaces, you have to choose one that
we work over. This is because we need to know a single canonical
IP/name for any given server, because we put that info up inside
ZooKeeper and the META tables. So it's not just an artificial constraint;
it exists for cluster management needs.

Having said that, we do work on multihomed machines, e.g. on EC2 you
might bind hbase to the internal interface to take advantage of the
unmetered/faster network. It's also better for security.
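
For example, if your release supports the dns.interface settings, steering
the master at a specific NIC looks roughly like this in hbase-site.xml
(eth1 is just a placeholder, and I'm not certain these properties exist
in 0.20.6):

<property>
  <name>hbase.master.dns.interface</name>
  <value>eth1</value>
</property>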

Let us know if you need more background on how we use the network and why.
-ryan
Michael Scott
2010-09-16 05:07:15 UTC
Permalink
Thanks for the continued advice. I am still confused by the different
behaviors of hadoop and hbase. As I said before, I can't get hbase to work
on any of the ports that hadoop works on, so I guess hadoop and hbase are
using different interfaces. Why is this, and can't I ask hbase to use the
interface that hadoop uses? What interfaces are hadoop and hbase using?

Also (and maybe this is the wrong forum for this question), how can I get my
OS to allow me to open 60000 using the IP address? I have temporarily
disabled selinux and iptables, as I thought that this would simply allow all
port connections. Still, this works just fine:
bash-4.0$ nc -l 60000 > /tmp/nc.out

but this does not:
bash-4.0$ nc -l 97.86.88.18 60000 > /tmp/nc.out
(returns "nc: Cannot assign requested address"; I get the same error for the
hostname instead of the IP address, and for 10.0.0.1, but 10.0.0.0 is
allowed)
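
(A quick way to confirm whether that address is configured on any local
interface at all, assuming iproute2 is installed:

ip addr show | grep 97.86.88.18

If it prints nothing, the kernel will refuse to bind to that address, which
would explain the nc error.)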

I am trying to get hbase running for a Socorro server, which will be running
locally. I don't know if that matters.

Thanks,
Michael
Ryan Rawson
2010-09-16 05:41:14 UTC
Permalink
What is your ifconfig output looking like?
N.N. Gesli
2010-09-16 06:12:40 UTC
Permalink
Hi Michael,

I was having a similar problem and following this thread for any
suggestions. I tried everything suggested and more.

I was trying to run a Hadoop/HBase pseudo-distributed setup on my Mac. I
initially started with Hadoop 0.21.0 and HBase 0.89. I had exactly
the same error that you were getting. Then I switched to Hadoop 0.20.2 and HBase
0.20.6 - still HMaster was not starting. Then finally it worked. Below are my
steps to success :)

* stopped hbase
* stopped hadoop
* ran jps; RegionServer was still running; killed it manually
* in tmp directory (where hadoop namenode and *.pid files are stored) I
removed everything related to hadoop and hbase, including the directories.
(I had no data in Hadoop, so I could do this)
* changed the ports back to default 600**
* changed back Hadoop and Hbase configurations to "localhost" in *site*.xml
and regionservers. (Only I will be using this - no remote connection)
* changed back my /etc/hosts to the original version. It looks like this:
127.0.0.1 localhost
::1 localhost
fe80::1%lo0 localhost
* reformatted the Hadoop namenode
* started Hadoop
* started HBase and it worked :)

Let me know if you want to know any specific configuration.
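
For reference, the localhost bits in my *site*.xml files look roughly like
this (a sketch; the HDFS port is just an example and depends on your own
core-site.xml):

In core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>

In hbase-site.xml:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
</property>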

N.Gesli
Michael Scott
2010-09-16 14:04:49 UTC
Permalink
This sounds promising; I have one quick question about your steps: where in
the HBase config *site*.xml did you make the change back to localhost? My
hbase master is using the public IP address (97.86.88.18), and I don't think
I've told it to. I want to convince hbase to get rid of the line in the log
file that says something like:

2010-09-16 09:59:21,727 INFO org.apache.hadoop.hbase.master.HMaster: My
address is 97-86-88-18.static.aldl.mi.charter.com:60000

(Note that my /etc/hosts has only the one line
127.0.0.1 localhost.localdomain localhost
since I'm not running IPv6, but somehow hbase knows that the interface is a
Comcast static address. I can use /etc/hosts to change that to the
registered domain name for 97-86-88-18, but this doesn't help.)

To reply to Ryan's question, my ifconfig gives:

eth0 Link encap:Ethernet HWaddr 00:24:E8:01:DA:B8
inet addr:10.0.0.2 Bcast:10.0.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:319475 errors:0 dropped:0 overruns:0 frame:0
TX packets:290698 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:108186958 (103.1 MiB) TX bytes:187845633 (179.1 MiB)
Interrupt:28 Base address:0xa000

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:370795 errors:0 dropped:0 overruns:0 frame:0
TX packets:370795 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:108117402 (103.1 MiB) TX bytes:108117402 (103.1 MiB)

Thanks a bunch!

Michael
N.N. Gesli
2010-09-16 17:53:51 UTC
Permalink
I have this in hbase-site.xml:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
  <description>The directory shared by region servers.
  Should be fully-qualified to include the filesystem to use.
  E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
  </description>
</property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
  <description>For pseudo-distributed, you want to set this to true.
  false means that HBase tries to put Master + RegionServers in one process.
  Pseudo-distributed = separate processes/pids</description>
</property>

<property>
  <name>hbase.regionserver.hlog.replication</name>
  <value>1</value>
  <description>For HBase to offer good data durability, we roll logs if
  filesystem replication falls below a certain amount. In pseudo-distributed
  mode, you normally only have the local filesystem or 1 HDFS DataNode, so
  you don't want to roll logs constantly.</description>
</property>

<property>
  <name>hbase.tmp.dir</name>
  <value>/tmp/hbase-testing</value>
  <description>Temporary directory on the local filesystem.</description>
</property>

I also have the Hadoop conf directory in HBASE_CLASSPATH (hbase-env.sh).

I just tried /etc/hosts with a "127.0.0.1 localhost.localdomain localhost" line. I got the same error I was getting before. I switched it back to "127.0.0.1 localhost" and it worked. In between those changes, I stopped HBase and Hadoop and killed the still-running region server. I hope that helps.

N.Gesli
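
With a setup like the one above, a quick cross-check (only a sketch; the paths assume a stock 0.20 tarball layout under $HADOOP_HOME and $HBASE_HOME) is to confirm that hbase.rootdir and Hadoop's fs.default.name name the same NameNode address, and that the port is actually listening:

grep -A 1 fs.default.name $HADOOP_HOME/conf/*-site.xml   # NameNode address Hadoop uses
grep -A 1 hbase.rootdir $HBASE_HOME/conf/hbase-site.xml   # NameNode address HBase will use
netstat -lnt | grep 9000                                  # is the NameNode RPC port bound?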
Michael Scott
2010-09-16 20:06:45 UTC
Permalink
Thanks again. This changes the behavior, but it does not yet fix my problem. The hbase.rootdir property forces the HBase master to stay alive for a little while, so I had a moment of short-lived euphoria when HMaster appeared in the jps list, but this only lasts while it tries to connect to localhost:9000 (which is not open). It still doesn't open port 60000, and it still thinks it is named my-static-ip.com (i.e., the same error message as before). Removing localhost.localdomain from /etc/hosts made no difference one way or the other. I am still looking for a way to have HBase bind to localhost:60000 instead of my-static-ip.com:60000. I will also try to see why localhost:9000 is not open (though that appears later in the log file, so I don't think it is causing the failure to open 60000).
Thanks for the help so far, I will post again with further info.

Michael
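
To see the same decision HBase is making without touching the source, it can help to compare what the hostname resolves to against the interfaces the box actually has (a rough sketch; 97.86.88.18 and 10.0.0.2 are just the addresses from this thread):

hostname                               # the name the master will report
getent hosts $(hostname)               # ...and the IP it resolves to
/sbin/ifconfig -a | grep 'inet addr'   # addresses this box really has

nc -l 97.86.88.18 60000   # the same bind HBase attempts; fails, no such interface here
nc -l 10.0.0.2 60000      # binding the internal interface should work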
Ryan Rawson
2010-09-16 20:51:03 UTC
Permalink
Hey,

OK, the picture is all clear.

HBase is a minimally configured system... you don't want to specify the bind address in your config file, because usually you have one file that you distribute to dozens or even potentially hundreds of systems. Specifying configuration for a single machine is just not the way to go with clustered software.

So what does HBase do? We need to know the node's identity so that when we register ourselves we know what our IP is, and that IP goes into the META table. So we grab the hostname (as per 'hostname' on most systems), resolve it through DNS, and use that IP to bind to.

In this case, the problem is that your hostname resolves to the external IP, which your host doesn't actually have an interface for. If you want to run internal network services behind a NAT you will need to have local IPs and hostnames, and not reuse your external name/IP as internal hostnames.

So, change your hostname to 'myhost' and make sure it resolves to 10.0.0.2 (your real IP) and you should be off to the races.

-ryan
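
Concretely, on a Fedora box the change Ryan describes might look roughly like this ('myhost' and 10.0.0.2 are just the example name and internal address from this thread, and /etc/sysconfig/network is where FC11 keeps the persistent hostname):

# /etc/hosts -- make the box's own name resolve to the internal interface,
# not the external/NAT address
127.0.0.1   localhost.localdomain localhost
10.0.0.2    myhost

hostname myhost        # set the running hostname; also set HOSTNAME=myhost
                       # in /etc/sysconfig/network so it survives a reboot

hostname               # verify: -> myhost
getent hosts myhost    # verify: -> 10.0.0.2 (roughly the lookup HBase does at startup)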
Michael Scott
2010-09-16 21:25:00 UTC
Permalink
THANK YOU. It is now listening on port 60000.

Michael
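
For anyone hitting this later, a quick way to confirm the master really is up (a sketch; 60010 is only the default master web UI port and may have been changed):

jps                              # HMaster should now be listed
netstat -lnt | grep 60000        # master RPC port is bound
curl -s http://localhost:60010/  # master web UI answers on the default info port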