action #55043
[openqa] Unable to resolve dist.suse.de from openQA backend after setting up the Multi-Machine environment

Added by bchou over 5 years ago. Updated over 5 years ago.

Status: Resolved
Priority: Normal
Assignee: okurz
Category: Support
Target version: -
Start date: 2019-08-02
Due date:
% Done: 0%
Estimated time:

Description

Test Suite:

mru-install-minimal-with-addons_fips
qam-openvpn-client_fips
qam-openvpn-server_fips

Description

I created and ran the openvpn test cases on my local openQA server, having already set up the Multi-Machine environment [1] in advance. After the setup I hit a problem: I cannot zypper refresh or install packages from the qa-head repo [2]. It fails with: could not resolve host: dist.suse.de.

I also checked whether I get a response from ping 10.160.0.100 (dist.suse.de) inside the test case, and it fails as well, so I do not think it is a DNS problem. I suspect a network configuration problem on the localhost, probably in the routing, NAT or bridge setup; see the sketch below.
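
For reference, a minimal way to separate a name-resolution failure from a routing failure (just a sketch, run inside the SUT; 10.160.0.100 is the address dist.suse.de resolves to):

# nslookup dist.suse.de        # tests name resolution only
# ping -c 3 10.160.0.100       # tests routing/NAT by IP, bypassing DNS entirely
# ip route get 10.160.0.100    # shows which interface and gateway would be used

Since both the lookup and the ping by IP fail here, the problem sits below DNS.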

I suspect it is a NAT problem, since I connect to the SUSE openVPN (SUSE-NUE) via the tun0 interface.

  • The local openQA server IP is 10.100.202.37 (tun0).
  • The eth0 IP is 192.168.1.101, but all the settings are related to eth0 and not tun0.

I have run the same cases in the Development Job Group on OSD [3], and they work fine there.

Reference

[1] Multi-Machine setup: https://github.com/os-autoinst/openQA/blob/master/docs/Networking.asciidoc#multi-machine-tests-setup
[2] Local openQA server : http://10.100.202.37/tests/454#step/openvpn_server/12
[3] OSD : https://openqa.suse.de/tests/overview?distri=sle&version=12-SP5&build=0251&groupid=168

[4] Network Environment information:

# ovs-vsctl show
eb63ee48-c14d-4a3d-ab4f-ca63d67f90ca
    Bridge "br1"
        Port "tap2"
            Interface "tap2"
        Port "tap0"
            Interface "tap0"
        Port "tap1"
            Interface "tap1"
        Port "br1"
            Interface "br1"
                type: internal
    ovs_version: "2.8.7"


# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether d8:9e:f3:26:3a:8d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.101/24 brd 192.168.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::da9e:f3ff:fe26:3a8d/64 scope link
       valid_lft forever preferred_lft forever
8: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 100
    link/none
    inet 10.100.202.37 peer 10.100.200.1/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::4eb4:b807:c921:389b/64 scope link flags 800
       valid_lft forever preferred_lft forever
14: tap0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master ovs-system state DOWN group default qlen 1000
    link/ether c6:08:0a:8c:cc:d6 brd ff:ff:ff:ff:ff:ff
15: tap1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master ovs-system state DOWN group default qlen 1000
    link/ether c6:c0:1e:34:bb:41 brd ff:ff:ff:ff:ff:ff
16: tap2: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master ovs-system state DOWN group default qlen 1000
    link/ether f2:14:d1:82:ca:b7 brd ff:ff:ff:ff:ff:ff
17: ovs-system: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether ca:28:56:56:2e:7b brd ff:ff:ff:ff:ff:ff
18: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 72:0a:aa:54:63:44 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.2/15 brd 10.1.255.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::700a:aaff:fe54:6344/64 scope link
       valid_lft forever preferred_lft forever

I am unable to bring the tap devices UP with ip link set dev tap0 up (presumably they only come up once a VM process attaches to them).


# ip r s
default via 192.168.1.1 dev eth0 proto dhcp
10.0.0.0/15 dev br1 proto kernel scope link src 10.0.2.2
10.0.0.0/8 via 10.100.200.1 dev tun0
10.100.200.1 dev tun0 proto kernel scope link src 10.100.202.37
137.65.0.0/16 via 10.100.200.1 dev tun0
147.2.0.0/16 via 10.100.200.1 dev tun0
149.44.0.0/16 via 10.100.200.1 dev tun0
151.155.128.0/17 via 10.100.200.1 dev tun0
164.99.0.0/16 via 10.100.200.1 dev tun0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.101


# netstat -an | grep ':443'
tcp 0 0 192.168.1.101:42276 195.135.220.6:443 ESTABLISHED
tcp 0 0 192.168.1.101:53934 195.135.220.6:443 ESTABLISHED
tcp 0 0 192.168.1.101:47608 192.30.253.124:443 ESTABLISHED
tcp 0 0 192.168.1.101:47446 108.177.97.189:443 ESTABLISHED
tcp 0 0 192.168.1.101:35636 216.58.199.99:443 ESTABLISHED
tcp 0 0 192.168.1.101:48362 195.135.221.167:443 ESTABLISHED
tcp 0 0 192.168.1.101:41750 23.9.185.199:443 ESTABLISHED
tcp 0 0 192.168.1.101:38646 172.217.24.197:443 ESTABLISHED
tcp 0 0 192.168.1.101:33468 172.217.24.78:443 ESTABLISHED
tcp 0 0 192.168.1.101:59664 140.82.113.25:443 ESTABLISHED

# netstat -an | grep ':22'
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN

tcp 0 36 10.100.202.37:22 10.163.2.52:59238 ESTABLISHED
tcp6 0 0 :::22 :::* LISTEN


# nslookup dist.suse.de
Server: 10.67.0.8
Address: 10.67.0.8#53

Name: dist.suse.de
Address: 10.160.0.100
Name: dist.suse.de
Address: 2620:113:80c0:8080:10:160:0:100

Actions #1

Updated by coolo over 5 years ago

  • Category set to Support

You'd better ask on RC, IRC or the mailing list - this is really not a good support forum.

Actions #2

Updated by okurz over 5 years ago

  • Status changed from New to Feedback
  • Assignee set to okurz
  • Did you try to reproduce with two manual VMs?
  • So the VMs are running on 10.100.202.37 as well, right?
  • Have you enabled NAT / IP Masquerade?
  • Which firewall solution are you using?
Actions #3

Updated by okurz over 5 years ago

okurz wrote:

  • Did you try to reproduce with two manual VMs?
  • So the VMs are running on 10.100.202.37 as well, right?

Yes, as confirmed over chat.

  • Have you enabled NAT / IP Masquerade?

I could log in to the machine (ssh root@…); IP forwarding is enabled according to grep 1 /proc/sys/net/ipv4/ip_forward.

  • Which firewall solution are you using?

It seems there is no firewall in place: I could not find a firewall service running, and iptables-save does not show any output. I am not aware of a way to set up IP masquerading without a firewall, so I recommend enabling one, e.g. firewalld, and setting up NAT / IP masquerade as described on http://open.qa/docs/#_multi_machine_tests_setup, adjusted for the interfaces on that server.
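
A minimal sketch of that setup with firewalld, assuming eth0 and tun0 are the uplinks and br1 is the bridge carrying the tap devices (zone layout as in the openQA documentation; adjust the interface names to your server):

# firewall-cmd --permanent --zone=external --add-interface=eth0   # LAN uplink
# firewall-cmd --permanent --zone=external --add-interface=tun0   # VPN uplink
# firewall-cmd --permanent --zone=external --add-masquerade       # NAT for traffic leaving the uplinks
# firewall-cmd --permanent --zone=internal --add-interface=br1    # SUT-facing bridge
# firewall-cmd --reload                                           # apply the permanent configuration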

Actions #4

Updated by bchou over 5 years ago

okurz wrote:

okurz wrote:

  • Did you try to reproduce with two manual VMs?
  • So the VMs are running on 10.100.202.37 as well, right?

Yes, as confirmed over chat.

  • Have you enabled NAT / IP Masquerade?

I could log in to the machine (ssh root@…); IP forwarding is enabled according to grep 1 /proc/sys/net/ipv4/ip_forward.

  • Which firewall solution are you using?

It seems there is no firewall in place: I could not find a firewall service running, and iptables-save does not show any output. I am not aware of a way to set up IP masquerading without a firewall, so I recommend enabling one, e.g. firewalld, and setting up NAT / IP masquerade as described on http://open.qa/docs/#_multi_machine_tests_setup, adjusted for the interfaces on that server.

Yes, I followed "Configure NAT with firewalld" from http://open.qa/docs/#_multi_machine_tests_setup and adjusted it; I re-enabled the firewall service and ran it again, but the problem still shows up.
Thanks a lot.

Actions #5

Updated by okurz over 5 years ago

Ok, I can see now that your bridge "br1" is in the zone "internal", but no masquerading is applied for internal, e.g. see:

# for i in external internal public trusted; do firewall-cmd --zone=$i --list-all; done
external (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: eth0 tun0
  sources: 
  services: ssh
  ports: 
  protocols: 
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

internal (active)
  target: default
  icmp-block-inversion: no
  interfaces: br1
  sources: 
  services: ssh mdns samba-client dhcpv6-client
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

I know that the documentation in http://open.qa/docs/#_multi_machine_tests_setup mentions that, but I am not sure how this is supposed to work.
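
For reference, the zone assignment and masquerading state can be checked like this (a sketch, assuming firewalld is running):

# firewall-cmd --get-active-zones                  # lists active zones and their interfaces
# firewall-cmd --zone=external --query-masquerade  # prints "yes" here
# firewall-cmd --zone=internal --query-masquerade  # prints "no" here, which is the problem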

Actions #6

Updated by okurz over 5 years ago

  • Status changed from Feedback to Resolved

On your machine I enabled masquerading on the internal zone (which you use) with firewall-cmd --zone=internal --add-masquerade and retriggered the latest tests:
http://10.100.202.37/tests/461

The test is able to access the network correctly, so I made the config permanent with firewall-cmd --runtime-to-permanent. Honestly, I don't fully understand why that would be necessary, but I will think about improving the documentation a bit in #52499 as well.
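
In short, the applied fix was:

# firewall-cmd --zone=internal --add-masquerade   # enable NAT for the internal zone (runtime only)
# firewall-cmd --runtime-to-permanent             # persist the runtime configuration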

I guess with this we can close this ticket. Reopen if there are still open points to address.

Actions #7

Updated by bchou over 5 years ago

okurz wrote:

On your machine I enabled masquerading on the internal zone (which you use) with firewall-cmd --zone=internal --add-masquerade and retriggered the latest tests:
http://10.100.202.37/tests/461

The test is able to access the network correctly, so I made the config permanent with firewall-cmd --runtime-to-permanent. Honestly, I don't fully understand why that would be necessary, but I will think about improving the documentation a bit in #52499 as well.

I guess with this we can close this ticket. Reopen if there are still open points to address.

Hi Oliver,

Sorry for the late reply, I spent some time digesting your comments.

I really appreciate your great help.

Does it mean that in my environment,

I need to run firewall-cmd --runtime-to-permanent --zone=internal --add-interface=br1 instead of firewall-cmd --permanent --zone=internal --add-interface=br1, and also set masquerade to yes on the internal zone to fix the problem, am I correct?

It would also be great if the documentation were improved. :-)

Thanks a lot.

Actions #8

Updated by okurz over 5 years ago

bchou wrote:

Does it mean that in my environment,

I need to run firewall-cmd --runtime-to-permanent --zone=internal --add-interface=br1 instead of firewall-cmd --permanent --zone=internal --add-interface=br1

No. If you tried that, firewall-cmd would complain with "Can't use stand-alone options with other options.". The option --runtime-to-permanent is meant to be called on its own. firewalld has a transient ("runtime") state and a "permanent" state; when you want to apply changes to both, you need to tell firewalld. There are multiple ways: you can change the runtime config and copy it to the permanent one, or set the permanent one and reload the firewall rules (or even reboot) to apply the permanent state. E.g. the following are equivalent:

firewall-cmd --zone=internal --add-interface=br1
firewall-cmd --permanent --zone=internal --add-interface=br1

and

firewall-cmd --zone=internal --add-interface=br1
firewall-cmd --runtime-to-permanent

and

firewall-cmd --permanent --zone=internal --add-interface=br1
reboot
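
To verify that the runtime and permanent configurations match, one can compare the two views (a quick sketch):

# firewall-cmd --zone=internal --list-all              # runtime configuration
# firewall-cmd --permanent --zone=internal --list-all  # permanent configuration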

, and also set masquerade to yes on the internal zone to fix the problem, am I correct?

yes, this is how I fixed it.

It would also be great if the documentation were improved. :-)

As soon as I have understood on which zones the masquerading needs to be set, we can improve that. Unfortunately, so far I am not feeling very confident about it myself.

Actions #10

Updated by okurz over 5 years ago

Correct. https://github.com/os-autoinst/openQA/pull/2245 was created to update the docs accordingly.

Actions #11

Updated by bchou over 5 years ago

Great. :) I really appreciate your help.
I will keep following the ticket.
Thank you.

Actions #12

Updated by bchou over 5 years ago

okurz wrote:

Correct. https://github.com/os-autoinst/openQA/pull/2245 was created to update the docs accordingly.

Hello Oliver,

I just added fips_setup to my test sequence, and

I met another resolve problem:
"can not resolve : scc.suse.com" and
"can not resolve : updates.suse.com"

The external zone has masquerade: yes set, but I have no idea why this still happens.

On the other hand, dist.suse.de can be resolved successfully.

I think the settings should be correct after configuring them via the newly updated documentation.

# for i in external internal public trusted; do firewall-cmd --zone=$i --list-all; done

external (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: eth0 tun0
  sources: 
  services: ssh
  ports: 
  protocols: 
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

internal (active)
  target: default
  icmp-block-inversion: no
  interfaces: br1
  sources: 
  services: ssh mdns samba-client dhcpv6-client
  ports: 
  protocols: 
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
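
If it helps, a possible next check (just a sketch; it assumes console access to the SUT and that 10.67.0.8, the resolver seen on the worker above, is the one the SUT should use):

# cat /etc/resolv.conf             # which nameserver does the SUT actually use?
# nslookup scc.suse.com            # reproduce the failing lookup
# nslookup scc.suse.com 10.67.0.8  # query the worker's resolver directly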

Please refer to
http://10.100.202.37/tests/494#step/fips_setup/8

Thanks a lot.
