.. _step_by_step_example:
***************************************************
Step-by-step example for testing ryu with OpenStack
***************************************************
Overview
========
Here is a step-by-step guide to check whether the ryu plugin and network
segregation work with OpenStack.
In this example, we
#. create one user account that is used as both admin and user
#. create two projects and a network tenant for each project
#. run VM instances for each project
#. open the VGA console via virt-manager
#. ping between the VMs
Note: this section does not explain how to install nova/quantum/ryu.
If you don't have any experience with OpenStack nova, it is strongly
recommended to try plain nova and quantum with the OVS plugin first.
Conventions
===========
The following variables are used to represent values that depend on your
configuration.
* $username: nova user account name which is used as both admin and user
You may want to create two accounts to separate admin
and user. In this example, a single account is used for
simplicity.
e.g. yamahata
* $tenant0: nova project name and tenant name.
This name is used as both the nova project name and the nova network
tenant name.
Here we reuse the nova project name as the network tenant name for
simplicity. If you'd like a more complex setup, please refer
to the nova documentation.
e.g. yamahata-project-0
* $iprange0: IP range used for $tenant0
e.g. 172.17.220.0/25
* $tenant1: another project name
e.g. yamahata-project-1
* $iprange1: another IP range for $tenant1
e.g. 172.17.221.0/25
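For convenience, you can export these as shell variables, using the example
values above, so that the commands in the following steps can be pasted
mostly as-is. This is just an optional convenience, not a required step::
$ export username=yamahata
$ export tenant0=yamahata-project-0
$ export iprange0=172.17.220.0/25
$ export tenant1=yamahata-project-1
$ export iprange1=172.17.221.0/25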
step-by-step testing
====================
In this example, euca2ools is used because it is handy.
A more OpenStack-native way is possible, though.
#. set up the nova database
Run the following on a nova node::
$ sudo nova-manage db sync
#. set up the quantum database
Use the mysql command to connect to the MySQL server::
$ mysql -u <admin user name> -p
Then create the quantum db and allow the agents to access it::
mysql> CREATE DATABASE ovs_quantum;
mysql> GRANT USAGE ON *.* to <user name>@'yourremotehost' IDENTIFIED BY 'newpassword';
mysql> FLUSH PRIVILEGES;
Here the database name, ovs_quantum, the user name, <user name>, and
its password, newpassword, are the ones defined in the ryu plugin
configuration file, ryu.ini.
If you are using multiple compute nodes, the GRANT statement needs to
be repeated for each host. Alternatively, the wildcard, %, can be used::
mysql> GRANT USAGE ON *.* to <user name>@'%' IDENTIFIED BY 'newpassword';
A quick check that a remote node can actually connect with these
credentials is sketched after this list.
#. Make sure nova, quantum, ryu and all other OpenStack components are
installed and running
In particular
* On each nova compute/network node
* Ryu must be installed
* the ryu quantum agent (ryu_quantum_agent.py) must be installed and running
* the OVS bridge must be configured
* On the machine where quantum-server is running
* Ryu must be installed
* the DB server must be accessible from all related servers
#. create a user on a nova node
Run the following on a nova node::
$ sudo nova-manage --flagfile=/etc/nova/nova.conf user admin $username
#. Create the project, get the zip file for the project, extract it and create
an ssh key for $tenant0
Run the following::
$ sudo nova-manage --flagfile /etc/nova/nova.conf project create $tenant0 --user=$username
$ sudo nova-manage --flagfile=/etc/nova/nova.conf project zipfile $tenant0 $username ./$tenant0.zip
$ sudo unzip ./$tenant0.zip -d $tenant0
$ source ./$tenant0/novarc
$ euca-add-keypair mykey-$tenant0 > mykey-$tenant0.priv
#. repeat the above step for $tenant1
#. create networks for each project
Run the following::
$ sudo nova-manage --flagfile=/etc/nova/nova.conf network create --label=$tenant0 --fixed_range_v4=$iprange0 --project_id=$tenant0
$ sudo nova-manage --flagfile=/etc/nova/nova.conf network create --label=$tenant1 --fixed_range_v4=$iprange1 --project_id=$tenant1
#. register image file
Get a VM image from somewhere (or create one yourself) and register it.
The easiest way is to use an image someone has already created. You can find
links below.
* `Getting Images that Work with OpenStack <http://wiki.openstack.org/GettingImages>`_.
* `ttylinux by Scott Moser <http://smoser.brickies.net/ubuntu/ttylinux-uec/>`_.
In this example we use the ttylinux image just because its size is small::
$ wget http://smoser.brickies.net/ubuntu/ttylinux-uec/ttylinux-uec-i686-12.1_2.6.35-22_1.tar.gz
$ cloud-publish-tarball ttylinux-uec-i686-12.1_2.6.35-22_1.tar.gz <bucket-name>
$ euca-register <bucket-name>/ttylinux-uec-i686-12.1_2.6.35-22_1.img.manifest.xml
Now you have the image IDs ari-xxx, aki-xxx and ami-xxx, where xxx is
some ID string.
Depending on which distribution you use, you may need to use another command
such as uec-publish-tarball.
If you customize images, you may have to use commands like euca-bundle-image,
euca-upload-bundle and euca-register.
Or, if you want to go a more OpenStack-native way, the glance command is
your friend for creating/registering images.
#. run instances
Boot instances for each project.
In order to test network segregation, two or more VM instances need to
be created::
$ source ./$tenant0/novarc
$ euca-run-instances ami-<id which you get above> -k mykey-$tenant0 -t m1.tiny
# repeat euca-run-instances as many times as needed.
$ source ./$tenant1/novarc
$ euca-run-instances ami-<id which you get above> -k mykey-$tenant1 -t m1.tiny
#. check that the VM instances are created
Get the list of VM instances you've created and their assigned IP addresses::
$ euca-describe-instances
#. log in to the VM instances and try ping/traceroute
In the plain nova case, you can log in to the VM instances via ssh, like
"ssh -i mykey-$tenant0.priv root@$ipaddress".
However, the VM instances are segregated from the management network, so the
story differs here. The easiest way to log in to a VM is to use virt-manager
(or virsh) on each compute node.
Identify on which compute node the VM is running with euca-describe-instances,
and run virt-manager on that compute node. Open the VGA console from the
virt-manager GUI, then you can log in to the VM instance.
Then try "ping <other VM IP or gateway>" or "traceroute <ip address>"
on each console. An example of what to expect is sketched after this list.
#. packet capture (optional)
You can run Wireshark or similar tools to observe what packets
are sent.
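As a quick sanity check for the database setup step above, you can confirm
that a remote node (e.g. a compute node) can at least connect to the MySQL
server with the credentials granted there. The host and user name below are
the same placeholders used in that step::
$ mysql -h <db server IP> -u <user name> -p -e 'SELECT 1;'
If this prints 1 instead of an access error, the node can reach the database
server.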
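For reference, here is roughly what the segregation check from the VGA
console can look like. The addresses are made-up examples from the IP ranges
in the Conventions section; use the addresses reported by
euca-describe-instances instead::
$ ping -c 3 172.17.220.3    # another VM of the same tenant: should answer
$ ping -c 3 172.17.221.3    # a VM of the other tenant: should get no answer if segregation works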
When something goes wrong
=========================
Unfortunately, something can go wrong sometimes.
The database tables used by OpenStack nova/quantum seem very fragile and
can easily end up in a broken state. If you hit this, the easiest recovery is to
#. stop all the related daemons
#. drop the related databases and re-create them (an example is sketched after this list)
#. clean up OVS related stuff
OVS uses its own database, which is persistent, so a reboot doesn't fix it.
The leaked resources must be released explicitly by hand; a way to spot
them is also sketched after this list.
The following commands help::
# ip link delete <tapxxx>
# tunctl -d <tapxxx>
# ovs-vsctl del-port <br-int> <gw-xxx>
# ovs-vsctl del-port <br-int> <tapxxx>
#. restart the daemons
#. set up from scratch.
Although you can also fix the databases by issuing SQL manually, you have to
know what you're doing with the db tables.
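For example, dropping and re-creating the quantum database looks like the
following (the nova database can be handled the same way, followed by
"nova-manage db sync" as in the setup steps)::
mysql> DROP DATABASE ovs_quantum;
mysql> CREATE DATABASE ovs_quantum;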
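To spot leaked ports before deleting them, it helps to list what is still
attached to the integration bridge and which tap devices remain, for
example::
# ovs-vsctl list-ports br-int
# ip link show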
Appendix
========
configuration file examples
---------------------------
This section includes sample configuration files that I use, for convenience.
Some values need to be changed depending on your setup, for example
IP addresses and port numbers.
* /etc/nova/nova.conf for api, compute, network, volume, object-store and scheduler node
Here is the nova.conf for a node on which all the nova servers run::
--verbose
# For debugging
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
# I set the three above as a personal preference.
# You don't have to set them if the defaults work for you
--use_deprecated_auth=true
# This depends on which authentication method you use.
--sql_connection=mysql://nova:nova@localhost/nova
# Change this depending on how MySQL (or another DB) is set up
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/local/bin/nova-dhcpbridge
# This path depends on where you install nova.
--fixed_range=172.17.220.0/16
# You have to change this parameter depending on which IPs you use
--network_size=128
# This depends on which IPs you use for one tenant
--network_manager=nova.network.quantum.manager.QuantumManager
--quantum_connection_host=127.0.0.1 # <IP on which the quantum server runs>
# Change this according to your set up
--connection_type=libvirt
--libvirt_type=kvm
--firewall_driver=quantum.plugins.ryu.nova.firewall.NopFirewallDriver
--libvirt_ovs_integration_bridge=br-int
--libvirt_vif_type=ethernet
--libvirt_vif_driver=quantum.plugins.ryu.nova.vif.LibvirtOpenVswitchOFPRyuDriver
--libvirt_ovs_ryu_api_host=<ip address on which ryu is running>:<port>
# default 127.0.0.1:8080
--linuxnet_interface_driver=quantum.plugins.ryu.nova.linux_net.LinuxOVSRyuInterfaceDriver
--linuxnet_ovs_ryu_api_host=<ip address on which ryu is running>:<port>
# default 127.0.0.1:8080
# usually the same as libvirt_ovs_ryu_api_host
--quantum_use_dhcp
* /etc/nova/nova.conf on compute nodes
I copied the above file to the compute node and modified it, so it still
includes values that are only needed on the network node. Since they do no
harm, I didn't scrub them::
--verbose
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--use_deprecated_auth
--sql_connection=mysql://nova:nova@<IP address>/nova
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--fixed_range=172.17.220.0/16
--network_size=128
--network_manager=nova.network.quantum.manager.QuantumManager
--quantum_connection_host=<IP address on which the quantum server is running>
--connection_type=libvirt
--libvirt_type=kvm
--libvirt_ovs_integration_bridge=br-int
--libvirt_vif_type=ethernet
--libvirt_vif_driver=quantum.plugins.ryu.nova.vif.LibvirtOpenVswitchOFPRyuDriver
--libvirt_ovs_ryu_api_host=<ip address on which ryu is running>:<port>
--linuxnet_interface_driver=quantum.plugins.ryu.nova.linux_net.LinuxOVSRyuInterfaceDriver
--linuxnet_ovs_ryu_api_host=<ip address on which ryu is running>:<port>
--firewall_driver=quantum.plugins.ryu.nova.firewall.NopFirewallDriver
--quantum_use_dhcp
--rabbit_host=<IP address on which rabbit mq is running>
--glance_api_servers=<IP address on which glance api server is running>:<port>
--ec2_host=<IP address on which ec2 api server is running>
--osapi_host=<IP address on which OpenStack api server is running>
--s3_host=<IP address on which S3 host is running>
--metadata_host=<IP address on which the ec2 metadata server is running>
* /etc/quantum/plugins.ini
This file needs to be installed on the node where quantum-server is running.
This file defines which quantum plugin is used::
[PLUGIN]
# Quantum plugin provider module
provider = quantum.plugins.ryu.ryu_quantum_plugin.RyuQuantumPlugin
* /etc/quantum/quantum.conf
This file needs to be installed on the node where quantum-server is running.
It is the configuration file for the quantum server; I use this file as is.
* /etc/quantum/plugins/ryu/ryu.ini
This file needs to be installed on the nova-compute nodes, the nova-network
node and the quantum-server node.
It defines several settings that the ryu quantum plugin/agent uses::
[DATABASE]
# This line MUST be changed to actually run the plugin.
# Example: sql_connection = mysql://root:nova@127.0.0.1:3306/ovs_quantum
#sql_connection = mysql://<user>:<pass>@<IP>:<port>/<dbname>
sql_connection = mysql://quantum:quantum@127.0.0.1:3306/ovs_quantum
[OVS]
integration-bridge = br-int
# openflow-controller = <host IP address of ofp controller>:<port: 6633>
# openflow-rest-api = <host IP address of ofp rest api service>:<port: 8080>
openflow-controller = <IP address on which ryu-manager is running>:<port>
# default 127.0.0.1:6633
# This corresponds to <ofp_listen_host>:<ofp_listen_port> in ryu.conf
openflow-rest-api = <IP address on which ryu-manager is running>:<port>
# default 127.0.0.1:8080
# This corresponds to <wsapi_host>:<wsapi_port> in ryu.conf
* /etc/ryu/ryu.conf
This file needs to be installed on the node where ryu-manager is running.
If you use the default configuration, you don't have to modify it;
just leave it blank (a quick check that ryu-manager is listening on its
default ports is sketched after this list)::
# Sample configuration file
#--wsapi_host=<hostip>
#--wsapi_port=<port:8080>
#--ofp_listen_host=<hostip>
#--ofp_listen_port=<port:6633>
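To quickly confirm that ryu-manager is actually listening on the OpenFlow
and REST API ports (6633 and 8080 by default, as noted above), you can check
the listening sockets on that node, for example::
$ netstat -ltn | grep -E '6633|8080'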