Project: openstack-dev/devstack d82e793c8ad9245004a90f3192987c8a1caf296b Add missing config option os_auth_url Without this option, the following error appears: Skip interval_task because Keystone error: Authorization Failed: Unable to establish connection to http://localhost:5000/v2.0/tokens Closes-Bug: #1325383 Change-Id: I42fe92596d9d475f2c5b2a6aa6b49d2b7b821c54
Project: openstack/requirements 3d28e020d05a032d71810316844278a3929312db Move pep8 to just 1.5.7 pep8 1.5.7 is just a bug-fix release; there are no new rules that were not in pep8 1.5.6. This move to 1.5.7 will help us move to flake8 2.2.0, which has no additional rules since flake8 2.1.0 but has parallelization turned on by default. Change-Id: I872a180eb066b4b85620bc03bce319345d004129
Project: openstack-infra/devstack-gate 5722542375705b2b878317cc878680623b711a2e double up [ for safety The construct [ "$UNSET_VAR" -eq "1" ] will throw a bash error of 'integer expression expected' if UNSET_VAR doesn't exist. In bash we can protect this by doubling up the brackets. Change-Id: I261212c9d225b756a82214aa28d9e43e5d89fe55
Project: openstack-dev/devstack f0945467265cfbf7e1614249c16303902e028244 use setup_install for django_openstack_auth This is fundamentally a library. As such we should use setup_install so that we can possibly pin it; otherwise we *always* get the git version instead. Change-Id: Ia815f2675cf535bb05a7e8eda853690171559b86
Project: openstack-dev/devstack e33379658ffc97ffa82117e5dc35f6eb01bde951 Revert "Build retry loop for screen sessions" This reverts commit 0afa912e99dc9bad8b490960beb8f0cf85750dcc. This possibly made things worse, though it coincides with the trusty addition, so it's hard to tell. Revert to see if grenade gets better. Change-Id: Ic399957fc9d4a7da28b030cdf895df061b2567c8 Related-Bug: #1331274
Project: openstack/oslo.messaging 555eb7980bdfa4b1df17f1cab6532e51b0ecc955 Fix the notifier example The notifier should be instantiated using messaging.Notifier, not notifier.Notifier. Change-Id: Id1930df6b758b292a7591e2e4cba2ef5a313a3cb
Project: openstack/python-novaclient bc453f0cb07106a93b40054d82e949cfff10d629 Convert hosts tests to httpretty Change-Id: Ib1cdb508ef04a86350d3890c14e181b6ed1177f2 blueprint: httpretty-testing
Project: openstack/python-novaclient e84c4e5d958c05e865358d967d94d9ca1d94f6be Convert Hypervisor tests to httpretty Change-Id: I0dc0167af618e88f76ace9b893b2b26966903457 blueprint: httpretty-testing
Project: openstack/python-novaclient 6120158e62284acfe81914a8081bdaabcb428162 Convert image tests to httpretty Change-Id: I3abc51ba4dcc641b72e3ac5e09955e4b22718451 blueprint: httpretty-testing
Project: openstack/python-novaclient 2b946ea630072a5901a05bae517e3b31e76f93d8 Convert keypair tests to httpretty Change-Id: I6876d97dd6600a0a34b89d9f693078f495085622 blueprint: httpretty-testing
Project: openstack/python-novaclient c0f45fdb744941f0f8b42d80c204b51fcdcbe11f Convert limit tests to httpretty Change-Id: I10e0357e0f79c009a00759fd22e6148d10b5286d blueprint: httpretty-testing
Project: openstack/python-novaclient 9a48cf8063415ec02c9be290f9d9e8019cc1f59a
Overhaul bash-completion to support non-UUID based IDs
There are a few things currently wrong with bash-completion as it stands now:
1) IDs are currently required to be UUIDs. This is an arbitrary limitation
and doesn't make sense for certain kinds of objects, like `Flavors`
where a valid ID could be `performance-16gb`.
2) The code is spread out between Oslo's `Resource` and Novaclient's
`Manager` class. This makes it difficult to improve the code because it
requires changes to two separate projects. We should centralize the
code in Novaclient until the API is stable, then import the code into
Oslo in its entirety, not partially like it is now.
3) The completion code is handled by the `Manager` of which there is one
per Resource-type. In the interest of centralizing this functionality,
we should create a `CompletionCache` class and hang it off of `Client`
of which there is one-per-session.
4) The completion-code currently runs by default even in headless mode
(e.g. novaclient without the shell). It'd be much more efficient to
only write to the completion cache if we're accessing the `Client` from
the novaclient shell. We can make this an option to support third-party
CLI clients that want to use the completion-cache as well.
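To illustrate point 1 above, a hypothetical sketch (not novaclient's actual code) of why a UUID-only completion filter silently drops valid IDs such as the flavor ID `performance-16gb`:

```python
import re

# Hypothetical sketch: a completion cache keyed on UUID-shaped IDs only
# drops valid non-UUID IDs such as the flavor ID 'performance-16gb'.
UUID_RE = re.compile(
    r'^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')

def uuid_only_cache(ids):
    """Old-style behaviour: keep only IDs that look like UUIDs."""
    return [i for i in ids if UUID_RE.match(i)]

def permissive_cache(ids):
    """Behaviour the change argues for: cache any non-empty ID string."""
    return [i for i in ids if i]

ids = ['3f2504e0-4f89-41d3-9a0c-0305e82c3301', 'performance-16gb']
```

With the UUID-only filter the flavor ID never reaches the completion cache; the permissive version keeps both.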
NOTE:
* The corresponding Oslo patch is here:
https://review.openstack.org/#/c/101376/
* This patch was tested in multithreaded mode to prevent any regression
from:
https://bugs.launchpad.net/python-novaclient/+bug/1213958.
Change-Id: Idada83de103358974b739f81d4f392574f9e1237
Closes-Bug: 1332270
Project: openstack/python-novaclient 5a3ca61cfdfb4aca20cdc0f571ede1ba7f6f7e11 Sync Oslo's apiclient Oslo's version of apiclient fixes a bug where if `human_id` is `None` it causes `novaclient` to crash, so let's sync it over to fix that bug. Change-Id: I53f174a1d3356c4038dcbdf88f4f9c4ea179418c References-Bug: 1288397
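The shape of the `human_id` fix can be sketched like this (a hypothetical, simplified helper, not apiclient's exact code): guard the slug computation so a nameless resource yields `None` instead of crashing.

```python
import re

def slugify(value):
    # Simplified slug helper in the spirit of oslo's string utils.
    value = re.sub(r'[^\w\s-]', '', value).strip().lower()
    return re.sub(r'[-\s]+', '-', value)

def human_id(name):
    # Guarded version: return None instead of crashing when the
    # resource has no display name (hypothetical sketch of the fix).
    if name is None:
        return None
    return slugify(name)
```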
Project: openstack/glance b7968cfa93db6ccf37a97169b23f31a1993803aa Document registry 'workers' option Currently the 'workers' option -- to increase the number of service processes -- is only documented for the API. This change adds the equivalent documentation for the registry. DocImpact Change-Id: I0cee0d284eef9ce5dcb26720499f2c4d31eaca0f Closes-Bug: #1334711
Project: openstack/python-keystoneclient 4a7c7944d7fffb9f048f790a781f5bc976b107f0 Updated from global requirements Change-Id: Ibb290d0f2d616b7730914bfc829a7f555ba0b688
Project: openstack/python-keystoneclient 3d4119d27fcffaebe13e5930bbad7c398a5eae0e Add issued handlers to auth_ref and fixtures issued_at is a standard part of V2 and V3 tokens so add it to AccessInfo in a similar way to expiry. Also it should be included when generating tokens so include it in fixtures. Change-Id: I0d62d8ce6472466886751e10e98046b8e398e079
Project: openstack/python-keystoneclient cf7e8afb2fe9477ec3d20ed57fc62658a4f64f63 Correcting using-api-v2.rst Changes * removed extraneous word from Introduction section Closes-Bug: #1334915 Change-Id: I201ddb70a4d91e0d615e322abc43848993dee573
Project: openstack/oslo.messaging 3578338f3a7b29c052e2b8e357940a25a22b54ed Fix structure of unit tests in oslo.messaging (part 2) Even for libraries that are not as big as nova, it is better to follow these rules: 1) the structure of the tests directory mirrors the root of the project, so the tests directory is well organized and it is simple to find where to write tests; 2) leaf test modules are named test_<name_of_testing_module>.py. Change-Id: I069121c5f32bbe51c6795e51c23ff3630fcd43e2
Project: openstack/oslo.messaging 4db3cebe23e7505ca436a4117e3649a5e6f18d27 Fix structure of unit tests in oslo.messaging (part 3, last) Even for libraries that are not as big as nova, it is better to follow these rules: 1) the structure of the tests directory mirrors the root of the project, so the tests directory is well organized and it is simple to find where to write tests; 2) leaf test modules are named test_<name_of_testing_module>.py. Change-Id: I75b4b3df4fffc8dfe389c547ae7a004d1b278ecc
Project: openstack/python-keystoneclient 90abb4cfb2c133fda1df5da11d8fc30ec9e5514b Minor grammatical fix in doc Change-Id: I0ee386588ab3083d90c5da44337460e39fa86e83 Closes-Bug: #1334915
Project: openstack-infra/devstack-gate b34c0a1480b565086eb88b07d7ab201e2387d597 Capture QEMU logs This commit adds support for capturing any qemu logs. We were previously capturing the libvirt logs, but for debugging it is useful to also have the qemu logs. Change-Id: I0d14074430a6a8ec6d99722646225f4ca3262080
Project: openstack/python-novaclient 6aa419b82ed6e0b06599eeaf69581e995116669d Adds clarification note for project_id vs tenant_id The client __init__ method takes both a project_id and tenant_id which is rather confusing as in the Nova API these terms are used interchangeably. The comment clarifies the difference between a project_id and tenant_id when using novaclient. For backwards compatibility reasons we can't really change the names (though for V3 perhaps we should in the future). Change-Id: I569fe9318335c8d686153b0936205cb190e01ef1
Project: openstack/oslo.messaging a704a3f1fcbb0e357d2fead8405c03c7586fa197 Use assertEqual instead of assertIs for strings Checking identity of strings is not a great idea. It might not work with different implementations (e.g. PyPy). Test usual equality instead. Change-Id: Ib1a673a0ac116f2c80d066e94a8bd9a9ef8f518a
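The pitfall can be shown in a few lines: two equal strings are not guaranteed to be the same object, since interning of literals is an interpreter detail.

```python
# Equal strings need not be the same object; whether they are is an
# implementation detail (CPython interning, PyPy, ...).
built = ''.join(['he', 'llo'])  # constructed at runtime, not interned
literal = 'hello'

assert built == literal      # value equality: always holds
assert built is not literal  # identity: distinct objects on CPython
```

So `assertEqual` is the portable check; `assertIs` on strings encodes an interpreter accident.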
Project: openstack/python-glanceclient 195384223f490db1886ebfab4ceee6e4bcd1c387 Fix CA certificate handling If --os-cacert was passed to the cli the following error was triggered: "cafile must be None or a byte string". This is because 'load_verify_locations' requires a byte string to be passed in. We fix this by explicitly converting the argument to a byte string. We do this in 'VerifiedHTTPSConnection' rather than sooner, eg during arg handling, as it will no longer be required should we move to a different http library (eg requests). Fixes bug 1301849. Change-Id: I9014f5d040cae9f0b6f03d8f13de8419597560cb
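The conversion described above amounts to a small coercion helper; this is a hypothetical sketch of the idea, not glanceclient's exact code:

```python
def to_bytes(value, encoding='utf-8'):
    """Coerce a text CA-file path to a byte string, as the underlying
    load_verify_locations call required (hypothetical helper)."""
    if isinstance(value, str):
        return value.encode(encoding)
    return value  # already bytes (or None): pass through unchanged
```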
Project: openstack/python-glanceclient 4ea01682daafe8d9ffd6eb83ff8fd7337c6029b8 Add license to setup.cfg Glance client's setup.cfg was missing the license attribute. This commit adds it to make it consistent with other clients and server libraries. The value of the license attribute reflects the license in the LICENSE file. Change-Id: Ia2e8c3be4fe7eaf0db5eb397646068c83076c2ff
Project: openstack/python-glanceclient 6d4a4b7ecb31b43287fe19ae31246f7645102409 Add missing classifiers This commit adds 2 more classifiers to setup.cfg. An environment classifier that specifies glanceclient is a console tool and a development classifier that specifies it is production ready. Change-Id: Ia60ea76798503b0a729c384298f1a633d695a1ab
Project: openstack/python-glanceclient abd0812d05e456e68af4d8bed04396a8815884b3 Add wheels section to the setup.cfg Glance client's setup.cfg was missing the wheels section. This commit adds it and makes the client's setup.cfg consistent with other clients. Change-Id: I16030c0379dae3c3c07bd73f09798c2160310811
Project: openstack/python-cinderclient 9485337b0eb355236244858d2409016fbdb85b1d Use region_name in service catalog Using attr and filter is no longer necessary. We provide a region_name filter directly that works with both v2 and v3 service catalogs. Change-Id: I67b50fcaa5e4df5c2bb7b2966b5ef2040e6286e7
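Conceptually the region_name filter is a lookup over the service catalog; a minimal, hypothetical stand-in over a keystone-v2-style catalog dict (the real clients use keystoneclient's ServiceCatalog) looks like:

```python
def url_for(catalog, service_type, region_name=None):
    """Return the first matching publicURL, optionally filtered by
    region (illustrative sketch, not the keystoneclient API)."""
    for service in catalog:
        if service.get('type') != service_type:
            continue
        for ep in service.get('endpoints', []):
            if region_name is None or ep.get('region') == region_name:
                return ep['publicURL']
    raise LookupError('no endpoint for %s in %s' % (service_type, region_name))

catalog = [{'type': 'volumev2',
            'endpoints': [{'region': 'RegionOne', 'publicURL': 'http://r1:8776'},
                          {'region': 'RegionTwo', 'publicURL': 'http://r2:8776'}]}]
```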
Project: openstack/python-glanceclient caa1a54fd9e69bc2a3fe43157a9805a71c2ef1a3 Added release notes for 0.13.0 Change-Id: I51374661a5ce58cd2a970a75893b1251ab6e176b
Project: openstack/oslo.config afab1f52661be27bcc61f3f516731e523ff9ef49 Introduce Opts for IP addresses In order to validate input values for IP addresses, a new Opt type was introduced. Validation is done at the parsing level, so there is no need to explicitly check for a valid IP address in the code. A requirement on the netaddr package was added. DocImpact Change-Id: I9adc30d9b989e8c636fefd435885c4c363ca540c Partial-Bug: #1284684
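The parse-time validation idea can be sketched with the stdlib `ipaddress` module (oslo.config's actual IPOpt uses netaddr; this is an illustrative stand-in):

```python
import ipaddress

def validate_ip(value, version=None):
    """Reject an option value at parse time if it is not a valid IP
    address of the requested version (sketch of the IPOpt idea)."""
    addr = ipaddress.ip_address(value)  # raises ValueError if invalid
    if version is not None and addr.version != version:
        raise ValueError('expected an IPv%d address, got %r' % (version, value))
    return value
```

Because the check runs when the option is parsed, code reading the option can trust the value is already a well-formed address.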
openstack-gerrit pushed a commit that referenced this pull request on Sep 10, 2015
Project: openstack/api-site 0c1f7cacd19e1175ea583c54f7a566fa1f3a74bd
Compute v2.1 docs clean up (part 7) (security_group_default_rules)
Add os-security-group-default-rules
It is based on the v2 ext file.
Changes: #1 remove the xml file, #2 change /v2 => /v2.1.
JSON samples are not edited/changed.
The ordering is also alphabetical:
os-security-groups
os-security-group-default-rules <= added
os-security-group-rules
Change-Id: I4c06148fe45b32f1aa936bba14d71cd6328fe439
Partial-Bug: #1488144
openstack-gerrit pushed a commit that referenced this pull request on Sep 10, 2015
Project: openstack/api-site a3df332507cadc3d75f15f43f6de0016424d62ec
Compute v2.1 docs clean up (part 8) (fixed_ips)
Add os-fixed-ips
It is based on the v2 ext file.
Changes: #1 remove the link to the xml sample file,
#2 change /v2 => /v2.1.
JSON samples are not edited/changed.
Change-Id: I8da2147514bc4940532953951f3310dc6ba0fef7
Partial-Bug: #1488144
openstack-gerrit pushed a commit that referenced this pull request on Oct 3, 2015
Project: openstack/swift c799d4de5296056b06e08d8025488472cfcb7d66
Validate against duplicate device part replica assignment
We should never assign multiple replicas of the same partition to the
same device - our on-disk layout can only support a single replica of a
given part on a single device. We should not do this, so we validate
against it and raise a loud warning if this terrible state is ever
observed after a rebalance.
Unfortunately there are currently a couple of not-uncommon
scenarios which will trigger this observed state today:
1. If we have fewer devices than replicas
2. If a server's or zone's aggregate device weight makes it the most
appropriate candidate for multiple replicas and you're a bit unlucky
Fixing #1 would be easy, we should just not allow that state anymore.
Really we never did - if you have a 3 replica ring with one device - you
have one replica. Everything that iter_nodes'd would de-dupe. We
should just be insisting that you explicitly acknowledge your replica
count with set_replicas.
I have been lost in the abyss for days searching for a general solution
to #2. I'm sure it exists, but I will not have wrestled it to
submission by RC1. In the meantime we can eliminate a great deal of the
luck required simply by refusing to place more than one replica of a
part on a device in assign_parts.
The meat of the change is a small update to the .validate method in
RingBuilder. It basically unrolls a pre-existing (part, replica) loop
so that all the replicas of the part come out in order so that we can
build up the set of dev_id's for which all the replicas of a given part
are assigned part-by-part.
If we observe any duplicates - we raise a warning.
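The unrolled check described above can be sketched as follows; this is a simplified stand-in for the RingBuilder.validate logic, not swift's actual code:

```python
def duplicate_assignments(replica2part2dev, part_count):
    """For each partition, collect the device of every replica and flag
    repeats: our on-disk layout supports only one replica of a given
    part per device (pigeonhole check sketched from the description)."""
    dupes = []
    for part in range(part_count):
        seen = set()
        for part2dev in replica2part2dev:  # one row per replica
            dev = part2dev[part]
            if dev in seen:
                dupes.append((part, dev))
            seen.add(dev)
    return dupes

# 3 replicas x 2 parts; part 1 has two replicas on device 0
table = [[0, 0],
         [1, 0],
         [2, 2]]
```

A real validate would turn a non-empty result into a loud warning (eventually an error).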
To clean the cobwebs out of the rest of the corner cases we're going to
delay get_required_overload from kicking in until we achieve dispersion,
and a small check was added when selecting a device subtier to validate
if it's already being used - picking any other device in the tier works
out much better. If no other devices are available in the tier - we
raise a warning. A more elegant or optimized solution may exist.
Many unittests did not meet criterion #1, but the fix was
straightforward after being identified by the pigeonhole check.
However, many more tests were affected by #2 - but again the fix came to
be simply adding more devices. The fantasy that all failure domains
contain at least replica count devices is prevalent in both our ring
placement algorithm and its tests. These tests were trying to
demonstrate some complex characteristics of our ring placement algorithm
and I believe we just got a bit too carried away trying to find the
simplest possible example to demonstrate the desirable trait. I think
a better example looks more like a real ring - with many devices in each
server and many servers in each zone - I think more devices makes the
tests better. As much as possible I've tried to maintain the original
intent of the tests - when adding devices I've either spread the weight
out amongst them or added proportional weights to the other tiers.
I added an example straw man test to validate that three devices with
different weights in three different zones won't blow up. Once we can
do that without raising warnings and assigning duplicate device part
replicas - we can add more. And more importantly change the warnings to
errors - because we would much prefer to not do that #$%^ anymore.
Co-Authored-By: Kota Tsuyuzaki <tsuyuzaki.kota@lab.ntt.co.jp>
Related-Bug: #1452431
Change-Id: I592d5b611188670ae842fe3d030aa3b340ac36f9
openstack-gerrit pushed a commit that referenced this pull request on Dec 10, 2015
Project: openstack/swift 0553d9333ed0045c4d209065b315533a33e5d7d7 Put part-replicas where they go It's harder than it sounds. There were really three challenges. Challenge #1 Initial Assignment =============================== Before starting to assign parts on this new shiny ring you've constructed, maybe we'll pause for a moment up front and consider the lay of the land. This process is called the replica_plan. The replica_plan approach is separating part assignment failures into two modes: 1) we considered the cluster topology and its weights and came up with the wrong plan 2) we failed to execute on the plan I failed at both parts plenty of times before I got it this close. I'm sure a counter example still exists, but when we find it the new helper methods will let us reason about where things went wrong. Challenge #2 Fixing Placement ============================= With a sound plan in hand, it's much easier to fail to execute on it the less material you have to execute with - so we gather up as many parts as we can - as long as we think we can find them a better home. Picking the right parts for gather is a black art - when you notice a balance is slow it's because it's spending so much time iterating over replica2part2dev trying to decide just the right parts to gather. The replica plan can help at least in the gross dispersion collection to gather up the worst offenders first before considering balance. I think trying to avoid picking up parts that are stuck to the tier before falling into a forced grab on anything over parts_wanted helps with stability generally - but depending on where the parts_wanted are in relation to the full devices it's pretty easy to pick up something that'll end up really close to where it started. I tried to break the gather methods into smaller pieces so it looked like I knew what I was doing. 
Going with a MAXIMUM gather iteration instead of balance (which doesn't reflect the replica_plan) doesn't seem to be costing me anything - most of the time the exit condition is either solved or all the parts overly aggressively locked up on min_part_hours. So far, it mostly seems that if the thing is going to balance this round it'll get it in the first couple of shakes. Challenge #3 Crazy replica2part2dev tables ========================================== I think there's lots of ways "scars" can build up a ring which can result in very particular replica2part2dev tables that are physically difficult to dig out of. It's repairing these scars that will take multiple rebalances to resolve. ... but at this point ... ... lacking a counter example ... I've been able to close up all the edge cases I was able to find. It may not be quick, but progress will be made. Basically my strategy just required a better understanding of how previous algorithms were able to *mostly* keep things moving by brute forcing the whole mess with a bunch of randomness. Then when we detect our "elegant" careful part selection isn't making progress - we can fall back to the same old tricks. Validation ========== We validate against duplicate part replica assignment after rebalance and raise an ERROR if we detect more than one replica of a part assigned to the same device. In order to meet that requirement we have to have as many devices as replicas, so attempting to rebalance with too few devices w/o changing your replica_count is also an ERROR not a warning. Random Thoughts =============== As usual with rings, the test diff can be hard to reason about - hopefully I've added enough comments to assure future me that these assertions make sense. Despite being a large rewrite of a lot of important code, the existing code is known to have failed us. This change fixes a critical bug that's trivial to reproduce in a critical component of the system. 
There's probably a bunch of error messages and exit status stuff that's not as helpful as it could be considering the new behaviors. Change-Id: I1bbe7be38806fc1c8b9181a722933c18a6c76e05 Closes-Bug: #1452431
openstack-gerrit pushed a commit that referenced this pull request on Dec 14, 2015
Project: openstack-infra/project-config 213913acada17fae9705a3fa555543bf7ee46701 puppet: pin bundler to 1.10.6 Puppet OpenStack CI is hitting this bug: rubygems/bundler#4149 It affects the latest bundler release that is used in Puppet OpenStack CI jobs (and breaking syntax3 and unit3.x). The patch that fixes bundler has merged but is not in the latest release. We had 2 options to fix it: * patch 26 modules with the dependency in Gemfile, plus backport patches in stable branches, which is around 100 patches + 100 other patches when bundler is fixed (200 patches). * patch jenkins/jobs/puppet-module-jobs.yaml to pin bundler to a previous release, until bundler has a new release. The Puppet OpenStack group decided to choose option #2 and fix it by pinning Bundler to 1.10.6. The Bundler team does not seem ready to provide a release: rubygems/bundler#4149 (comment) Change-Id: I907c595be0e7483eb0c3db545e36bf036d601b6d Closes-Bug: #1525929
openstack-gerrit pushed a commit that referenced this pull request on Feb 9, 2016
Project: openstack-infra/project-config 2afd7a9de884369e1c5421a7d3f7e97d2cf7e925 Experimental jobs for testing against oslo.* from master - Take #2 In I82ef7a5d5a3c1076efbd4306b7099cd28960cd90, we added periodic jobs for Nova to test py27 and py34 tox targets against oslo.* from master branch on a daily basis. In this review we are expanding the set to some more projects that oslo releases tend to break :) Change-Id: Ibdfc03f27450a5392acc276f98bfb464f9a0f663
openstack-gerrit pushed a commit that referenced this pull request on Feb 23, 2016
Project: openstack/keystone 135907c121758ca6bccf6c74a333f1a90c228007
Shadow users - Separate user identities
"Shadow users: unified identity" implementation:
Separated user identities from their locally managed credentials by
refactoring the user table into user, local_user, and password tables.
user -> local_user -> password
-> federated_user
-> ...
(identity) -> (credentials)
Migrated data from the user table to the local_user and password tables.
Modified backend code to utilize the new tables.
Note: #2 "Shadow LDAP and federated users" will be completed in a
different patch. The federated_user table will be added with that patch.
bp shadow-users
Change-Id: I0b6c188824e856d788fe7156e4a9dc2a04cdb6f8
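The refactored layout above can be illustrated with a simplified, hypothetical schema (identity in `user`, locally managed credentials hanging off `local_user` -> `password`; column names here are illustrative, not keystone's exact migration):

```python
import sqlite3

# Hypothetical simplified rendition of the shadow-users table split.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE user (id TEXT PRIMARY KEY, enabled INTEGER);
    CREATE TABLE local_user (
        id INTEGER PRIMARY KEY,
        user_id TEXT UNIQUE REFERENCES user(id),
        name TEXT NOT NULL
    );
    CREATE TABLE password (
        id INTEGER PRIMARY KEY,
        local_user_id INTEGER REFERENCES local_user(id),
        password_hash TEXT
    );
""")
conn.execute("INSERT INTO user VALUES ('abc123', 1)")
conn.execute("INSERT INTO local_user (user_id, name) VALUES ('abc123', 'alice')")
conn.execute(
    "INSERT INTO password (local_user_id, password_hash) VALUES (1, 'x')")
# Credentials are reached only through the local_user join, so a
# federated_user table can later hang off user the same way.
row = conn.execute(
    "SELECT u.id, p.password_hash FROM user u "
    "JOIN local_user lu ON lu.user_id = u.id "
    "JOIN password p ON p.local_user_id = lu.id").fetchone()
```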
openstack-gerrit pushed a commit that referenced this pull request on May 18, 2016
Project: openstack/manila 7202a75691881c2b03896463981bd0a5ba14f804
Fix context decorator usage in DB API
Manila's sqlalchemy API has decorators to require
a context or admin context argument to its DB
methods.
Add missing context-check decorators where context
argument is required in accord with the following
principles:
1. Private methods should begin with an underscore
and public methods should not.
2. All public methods should have appropriate context
requirement decorators.
3. No private methods have context requirement
decorators since these are redundant if
principle #2 is enforced.
Correct unit tests that inappropriately called these
methods without context as well.
Closes-Bug: #1580690
Change-Id: Ic448d40ef83a02837dd9bc2c6465080387305ca1
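The decorator pattern in question can be sketched like this (a hypothetical minimal version; manila's real decorator validates a RequestContext rather than just non-None):

```python
import functools

def require_context(f):
    """Reject DB API calls made without a context argument
    (simplified sketch of manila's context-check decorators)."""
    @functools.wraps(f)
    def wrapper(context, *args, **kwargs):
        if context is None:
            raise TypeError('%s requires a context argument' % f.__name__)
        return f(context, *args, **kwargs)
    return wrapper

@require_context  # public method: decorated per principle #2
def share_get(context, share_id):
    # Toy body standing in for a real DB query.
    return {'id': share_id, 'project_id': context['project_id']}
```

Per principle 3, a private `_share_get` helper called only from decorated public methods would carry no decorator of its own.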
openstack-gerrit pushed a commit that referenced this pull request on Mar 22, 2017
Project: openstack/openstack-manuals ec9b22b9c8e091e67f5f3d31239068a2e39c2f21 Fix note on specifying a specific host for nova The parsing goes like this in Nova: 1. --availability-zone ZONE (no host or node) 2. --availability-zone ZONE:HOST (no node) 3. --availability-zone ZONE::NODE (no host) 4. --availability-zone ZONE:HOST:NODE So we need to fix the docs to match case #2. Change-Id: Iedb8d221d0a33f18a7e4e10dff7b35823eef90a7 Closes-Bug: #1673252
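The four cases above split cleanly on colons; an illustrative sketch of the parsing (not nova's exact parser):

```python
def parse_availability_zone(value):
    """Split ZONE[:HOST[:NODE]] into its three parts, with None for
    any omitted component (sketch of the parsing described above)."""
    zone, _, hostnode = value.partition(':')
    host, _, node = hostnode.partition(':')
    return zone or None, host or None, node or None
```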
openstack-gerrit pushed a commit that referenced this pull request on Mar 29, 2017
Project: openstack/glance 327682e8528bf4effa6fb16e8cabf744f18a55a1 Fix incompatibilities with WebOb 1.7 WebOb 1.7 changed [0] how request bodies are determined to be readable. Prior to version 1.7, the following is how WebOb determined if a request body is readable: #1 Request method is one of POST, PUT or PATCH #2 ``content_length`` length is set #3 Special flag ``webob.is_body_readable`` is set The special flag ``webob.is_body_readable`` was used to signal WebOb to consider a request body readable despite the content length not being set. #1 above is how ``chunked`` Transfer Encoding was supported implicitly in WebOb < 1.7. Now with WebOb 1.7, a request body is considered readable only if ``content_length`` is set and it's non-zero [1]. So, we are only left with #2 and #3 now. This drops implicit support for ``chunked`` Transfer Encoding Glance relied on. Hence, to emulate #1, Glance must set the special flag upon checking the HTTP methods that may have bodies. This is precisely what this patch attempts to do. [0] Pylons/webob#283 [1] https://github.com/Pylons/webob/pull/283/files#diff-706d71e82f473a3b61d95c2c0d833b60R894 Closes-bug: #1657459 Closes-bug: #1657452 Co-Authored-By: Hemanth Makkapati <hemanth.makkapati@rackspace.com> Change-Id: I19f15165a3d664d5f3a361f29ad7000ba2465a85
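Emulating rule #1 amounts to setting the flag for body-bearing methods when no Content-Length is present; a hypothetical WSGI-environ-level sketch of the approach (Glance's patch actually sets the flag on its Request subclass):

```python
CHUNKED_METHODS = frozenset(['POST', 'PUT', 'PATCH'])

def mark_body_readable(environ):
    """Flag bodies of POST/PUT/PATCH requests as readable even without
    a Content-Length, restoring WebOb < 1.7 behaviour #1 (sketch)."""
    if (environ.get('REQUEST_METHOD') in CHUNKED_METHODS
            and not environ.get('CONTENT_LENGTH')):
        environ['webob.is_body_readable'] = True
    return environ
```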
openstack-gerrit pushed a commit that referenced this pull request on Apr 20, 2017
Project: openstack/nova 3cc34c2aa3d7bae8863dd3238ecea2765bb6c855 PowerVM Driver: spawn/destroy #2: functional Building on Ic45bb064f4315ea9e63698a7c0e541c5b0de5051, this change set makes the spawn and destroy methods functional in a basic way. A subsequent change will introduce TaskFlow framework. The VMs still have no network or storage - those will be coming in future change sets. Change-Id: I85f740999b8d085e803a39c35cc1897c0fb063ad Partially-Implements: blueprint powervm-nova-compute-driver
openstack-gerrit pushed a commit that referenced this pull request on Apr 26, 2017
Project: openstack/neutron 03c5283c69f1f5cba8a9f29e7bd7fd306ee0c123 use neutron-lib callbacks The callback modules have been available in neutron-lib since commit [1] and are ready for consumption. As the callback registry is implemented with a singleton manager instance, sync complications can arise ensuring all consumers switch to lib's implementation at the same time. Therefore this consumption has been broken down: 1) Shim neutron's callbacks using lib's callback system and remove existing neutron internals related to callbacks (devref, UTs, etc.). 2) Switch all neutron's callback imports over to neutron-lib's. 3) Have all sub-projects using callbacks move their imports over to use neutron-lib's callbacks implementation. 4) Remove the callback shims in neutron-lib once sub-projects are moved over to lib's callbacks. 5) Follow-on patches moving our existing uses of callbacks to the new event payload model provided by neutron-lib.callback.events This patch implements #2 from above, moving all neutron's callback imports to use neutron-lib's callbacks. There are also a few places in the UT code that still patch callbacks, we can address those in step #4 which may need [2]. NeutronLibImpact [1] fea8bb64ba7ff52632c2bd3e3298eaedf623ee4f [2] I9966c90e3f90552b41ed84a68b19f3e540426432 Change-Id: I8dae56f0f5c009bdf3e8ebfa1b360756216ab886
openstack-gerrit pushed a commit that referenced this pull request on Jun 14, 2017
Project: openstack/neutron ce33de5f518a807e0c01ccf7aa90682f0d24a5da Do not defer allocation if fixed-ips is in the port create request. Fix a usage regression, use case #2 in Nova Neutron Routed Networks spec https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/neutron-routed-networks.html Currently IP allocation is always deferred if binding:host_id is not specified when routed networks are used. This causes initial fixed-ips data provided by the user to be _lost_ unless the user also specifies the host. Since the user specified the IP or Subnet to allocate in, there is no reason to defer the allocation. a) It is a common pattern, especially in Heat templates, to: 1. Create a port with fixed-ips specifying a subnet. 2. Create a server and associate the existing port. b) It is also common to use Neutron IPAM as a source to get VIP addresses for clusters on provider networks. This change enables these use cases with routed networks. DocImpact: "The Networking service defers assignment of IP addresses to the port until the particular compute node becomes apparent." This is no longer true if fixed-ips is used in the port create request. Change-Id: I86d4aafa1f8cd425cb1eeecfeaf95226d6d248b4 Closes-Bug: #1695740
openstack-gerrit pushed a commit that referenced this pull request on Oct 10, 2017
* Update governance from branch 'master'
- Merge "Adjustments to Infra contributors top-5 entry"
- Adjustments to Infra contributors top-5 entry
Now that If344bb3862d0b54a37cd28933ef1f01ad075ba31 has merged, make
non-blocking adjustments requested by reviewers there:
Move the Infra contributors entry from slot #3 to #2 since there
seems to be some perceived prioritization at work, and the Glance
situation is now reported to be less dire.
Document myself as the TC sponsor for the Infra contributors entry.
Change-Id: I50d13e02c9fa4bd1c36d61529c89efdec1865e31
openstack-gerrit pushed a commit that referenced this pull request on Oct 22, 2017
* Update project-config from branch 'master'
- Merge "Zuul-v3: add required projects for neutron-lib periodic jobs"
- Zuul-v3: add required projects for neutron-lib periodic jobs
Before moving away from the legacy/deprecated way of doing things,
bring sanity back to periodic runs for neutron-lib related projects.
This is step 1 of 2 (#2 to fix Grafana).
Depends-on: Ic05f2f9484af05b2b2d0fe143e0bb87700e116fd
Change-Id: I2950d41887b9820e9e4785650c3743792e5c73c3
openstack-gerrit pushed a commit that referenced this pull request on Dec 13, 2017
* Update project-config from branch 'master' - Merge "base/multinode rename #2: Rename project-config jobs" - base/multinode rename #2: Rename project-config jobs We want to rename the base and multinode integration jobs in order to make sure they are not mistaken by users trying to test their projects. These jobs are not meant to be used outside of integration testing the playbooks and roles found in project-config, zuul-jobs and openstack-zuul-jobs. This is part two of three in renaming the base and multinode integration jobs. We need to: 1) Add new jobs in openstack-zuul-jobs 2) Make project-config use the new job names <-- We're here 3) Remove old jobs Depends-On: I150a0a6dacc7b8862a40f1382ea730957dea0faf Change-Id: I4ef44e64a03cc3089e02343de506e0a6fd85a55c
openstack-gerrit pushed a commit that referenced this pull request on Feb 12, 2019
* Update nova from branch 'master' - Merge "Follow up (#2) for the bw resource provider series" - Follow up (#2) for the bw resource provider series This addresses review comments from the following changes: I61a3e8902a891bac36911812e4e7c080570e3850 I48e6db9693e470b177bf4c75211d8b883c768433 Ic70d2bb781b6a844849a5cf2fe4d271b5a81093d I5a956513f3485074023e027430cc52ee7a3f92e4 Ica6152ccb97dce805969d964d6ed032bfe22a33f Part of blueprint bandwidth-resource-provider Change-Id: Idffaa6d206cda3f507e6be095356537f22302ad7
openstack-mirroring
pushed a commit
that referenced
this pull request
Aug 22, 2020
* Update tripleo-heat-templates from branch 'master'
- Merge "Fix pcs restart in composable HA"
- Fix pcs restart in composable HA
When a redeploy command is being run in a composable HA environment, if there
are any configuration changes, the <bundle>_restart containers will be kicked
off. These restart containers will then try and restart the bundles globally in
the cluster.
These restarts will be fired off in parallel from different nodes. So
haproxy-bundle will be restarted from controller-0, mysql-bundle from
database-0, rabbitmq-bundle from messaging-0.
This has proven to be problematic and very often (rhbz#1868113) it would fail
the redeploy with:
2020-08-11T13:40:25.996896822+00:00 stderr F Error: Could not complete shutdown of rabbitmq-bundle, 1 resources remaining
2020-08-11T13:40:25.996896822+00:00 stderr F Error performing operation: Timer expired
2020-08-11T13:40:25.996896822+00:00 stderr F Set 'rabbitmq-bundle' option: id=rabbitmq-bundle-meta_attributes-target-role set=rabbitmq-bundle-meta_attributes name=target-role value=stopped
2020-08-11T13:40:25.996896822+00:00 stderr F Waiting for 2 resources to stop:
2020-08-11T13:40:25.996896822+00:00 stderr F * galera-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F * rabbitmq-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F * galera-bundle
2020-08-11T13:40:25.996896822+00:00 stderr F Deleted 'rabbitmq-bundle' option: id=rabbitmq-bundle-meta_attributes-target-role name=target-role
2020-08-11T13:40:25.996896822+00:00 stderr F
or
2020-08-11T13:39:49.197487180+00:00 stderr F Waiting for 2 resources to start again:
2020-08-11T13:39:49.197487180+00:00 stderr F * galera-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F * rabbitmq-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F Could not complete restart of galera-bundle, 1 resources remaining
2020-08-11T13:39:49.197487180+00:00 stderr F * rabbitmq-bundle
2020-08-11T13:39:49.197487180+00:00 stderr F
After discussing it with kgaillot it seems that concurrent restarts in pcmk are just brittle:
"""
Sadly restarts are brittle, and they do in fact assume that nothing else is causing resources to start or stop. They work like this:
- Get the current configuration and state of the cluster, including a list of active resources (list #1)
- Set resource target-role to Stopped
- Get the current configuration and state of the cluster, including a list of which resources *should* be active (list #2)
- Compare lists #1 and #2, and the difference is the resources that should stop
- Periodically refresh the configuration and state until the list of active resources matches list #2
- Delete the target-role
- Periodically refresh the configuration and state until the list of active resources matches list #1
"""
So the suggestion is to replace the restarts with an enable/disable cycle of the resource.
Tested this on a dozen runs on a composable HA environment and did not observe the error
any longer.
Closes-Bug: #1892206
Change-Id: I9cc27b1539a62a88fb0bccac64e6b1ae9295f22e
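The restart sequence quoted from kgaillot can be sketched as a toy Python model (FakeCluster and restart are hypothetical names, not pacemaker code) to show why it assumes nothing else is starting or stopping resources:

```python
# Toy model of the quoted restart sequence. The algorithm compares two
# snapshots of cluster state; a concurrent restart fired from another
# node changes those lists underneath it, so the waits never converge.

class FakeCluster:
    """Minimal stand-in for cluster state (illustration only)."""

    def __init__(self, active):
        self.active = set(active)
        self.roles = {}

    def active_resources(self):
        return set(self.active)

    def set_target_role(self, resource, role):
        self.roles[resource] = role
        if role == "Stopped":
            self.active.discard(resource)

    def expected_active(self):
        return set(self.active)

    def delete_target_role(self, resource):
        self.roles.pop(resource, None)
        self.active.add(resource)

    def wait_until_active_equals(self, expected):
        # Real pacemaker polls here; a concurrent restart elsewhere can
        # keep the active set from ever matching.
        assert self.active == expected


def restart(cluster, resource):
    active_before = cluster.active_resources()      # list #1
    cluster.set_target_role(resource, "Stopped")
    expected_stopped = cluster.expected_active()    # list #2
    to_stop = active_before - expected_stopped      # what must stop
    cluster.wait_until_active_equals(expected_stopped)
    cluster.delete_target_role(resource)
    cluster.wait_until_active_equals(active_before)
    return to_stop
```

An enable/disable cycle of a single resource avoids the global list comparison, which is why the fix is less brittle under concurrency.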
openstack-mirroring
pushed a commit
that referenced
this pull request
May 11, 2021
* Update designate-tempest-plugin from branch 'master'
to 3675bd53b0894903abf47e768bb045160c687284
- Merge "New API test cases for a Zone test suite."
- New API test cases for a Zone test suite.
"test_get_primary_zone_nameservers"
1) Create a PRIMARY Zone
2) Retrieve the zone's Name Servers and validate that the list is not empty
3) Get the zone's "pool_id"
4) Make sure that the zone's Name Servers retrieved in #2
are the same as those created in the zone's pool.
"test_create_zones" scenario
1) Create PRIMARY zone and validate the creation
2) Get the Name Servers created in PRIMARY zone and extract hosts list.
Hosts list is used to provide "masters" on SECONDARY zone creation
3) Create a SECONDARY zone and validate the creation
# Note: the existing test was modified to cover both types:
PRIMARY and SECONDARY
"test_manually_trigger_update_secondary_zone_negative"
1) Create a Primary zone
2) Get the nameservers created in #1 and make sure that
those nameservers are not reachable (cannot be pinged)
3) Create a secondary zone
4) Manually trigger zone update and make sure that
the API fails with status code 500 as Nameservers aren’t available.
"test_zone_abandon"
1) Create a zone
2) Show a zone
3) Make sure that the created zone is in: Nameserver/BIND
4) Abandon a zone
5) Wait till a zone is removed from the Designate DB
6) Make sure that the zone is still in Nameserver/BIND
"test_zone_abandon_forbidden"
1) Create a zone
2) Show a zone
3) Make sure that the created zone is in: Nameserver/BIND
4) Abandon a zone as primary tenant (not admin)
5) Make sure that the API fails with: "403 Forbidden"
Change-Id: I6df991145b1a3a2e4e1d402dd31204a67fb45a11
openstack-mirroring
pushed a commit
that referenced
this pull request
Apr 27, 2022
* Update kuryr-kubernetes from branch 'master'
to b7e87c94b1a9af467806c297975b80cd8ff40de1
- Merge "Pools: Fix order of updated SGs"
- Pools: Fix order of updated SGs
According to the comments in vif_pool.py, if there are no ports in the
pool with the requested SG set, we should update the SG on another port,
starting from the ones that were created earliest. I think this logic is
to make sure we grab the ports with the most outdated SGs. Anyway, that
code is currently broken because of two issues:
1. _last_update dict is always updated by replacing whole dict,
basically meaning that it's only holding data for SG that got updated
most recently.
2. There's a race condition where in _get_port_from_pool multiple
threads can steal a port from each other.
This commit solves #2 by switching to an OrderedDict to track which SG
is the one that was used most recently. This way we can just iterate the
OrderedDict when choosing which port should get its SG updated and just
choose the next port if we get IndexError on pop(). This also solves #1
because _last_update is no longer used to decide which ports have the
most outdated SGs.
Change-Id: Ia3159ee007be865db404e2dcef688abe21592553
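The OrderedDict approach described above can be sketched roughly like this (a simplified model with hypothetical helper names; the real logic lives in kuryr-kubernetes vif_pool.py):

```python
from collections import OrderedDict

# Keys are security-group identifiers, kept in update order: the first
# key is the SG updated longest ago. (Replacing the whole dict, as in
# issue #1 above, would keep only the most recently updated SG.)
_last_update = OrderedDict()


def record_sg_update(sg_key):
    # Move (or add) the SG to the end: it is now the most recent.
    _last_update.pop(sg_key, None)
    _last_update[sg_key] = True


def pick_port_to_update(pools):
    # Iterate SGs starting with the most outdated one. If another
    # thread already stole the last port (IndexError on pop), just
    # move on to the next SG instead of failing.
    for sg_key in list(_last_update):
        try:
            return pools[sg_key].pop()
        except (KeyError, IndexError):
            continue
    return None
```

Catching IndexError on pop() is what makes the port grab safe against the race in issue #2: losing the race simply means trying the next candidate.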
openstack-mirroring
pushed a commit
that referenced
this pull request
Aug 2, 2022
* Update nova from branch 'master'
to cc6045a4acea0cd916b32508f66ff931032af4ea
- Merge "Remove double mocking"
- Remove double mocking
In py310 unittest.mock does not allow mocking the same function twice, as
the second mocking will fail to autospec the Mock object created by the
first mocking.
This patch manually fixes the double mocking.
Fixed cases:
1) one of the mocks was totally unnecessary, so it was removed
2) the second mock specialized the behavior of the first generic mock.
In this case the second mock is replaced with the configuration of
the first mock
3) a test case with two test steps mocked the same function for each
step with overlapping mocks. Here the overlap was removed so that
the two mocks exist independently
The get_connection injection in the libvirt functional test needed a
further tweak (yeah I know it has many already) to act like a single
mock (basically case #2) instead of a temporary re-mocking. Still, the
globalness of the get_connection mocking warrants the special set / reset
logic there.
Change-Id: I3998d0d49583806ac1c3ae64f1b1fe343cefd20d
openstack-mirroring
pushed a commit
that referenced
this pull request
Aug 25, 2022
* Update nova from branch 'master'
to ccc06ac808458e009b9bee3cf8cdd43242204920
- Merge "Trigger reschedule if PCI consumption fail on compute"
- Trigger reschedule if PCI consumption fail on compute
The PciPassthroughFilter logic checks each InstancePCIRequest
individually against the available PCI pools of a given host and given
boot request. So it is possible that the scheduler accepts a host that
has a single PCI device available even if two devices are requested for
a single instance via two separate PCI aliases. The PCI claim on the
compute then detects this but does not stop the boot; it just logs an
ERROR. This results in the instance booting without any PCI device.
This patch does two things:
1) changes the PCI claim to fail with an exception and trigger a
re-schedule instead of just logging an ERROR.
2) changes PciDeviceStats.support_requests, which is called during
scheduling, to not just filter pools for individual requests but also
consume the request from the pool within the scope of a single boot
request.
The fix in #2) would not be enough alone, as two parallel scheduling
requests could race for a single device on the same host. #1) is the
ultimate place where we consume devices under a compute-global lock, so
we need the fix there too.
Closes-Bug: #1986838
Change-Id: Iea477be57ae4e95dfc03acc9368f31d4be895343
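A minimal sketch of fix #2 (consuming from the pools while checking, instead of filtering each request independently). This is a simplification with hypothetical pool/request shapes, not nova's actual PciDeviceStats code:

```python
import copy


def support_requests(pools, requests):
    """Check all requests of one boot against one host's pools.

    Consuming from a working copy makes a second request for the same
    alias see the device taken by the first one, instead of both
    matching the same single device.
    """
    working = copy.deepcopy(pools)  # never mutate real pool state here
    return all(_consume(working, request) for request in requests)


def _consume(pools, request):
    needed = request["count"]
    for pool in pools:
        if pool["alias"] == request["alias"] and pool["count"] > 0:
            taken = min(pool["count"], needed)
            pool["count"] -= taken
            needed -= taken
            if needed == 0:
                return True
    return False
```

As the commit notes, this alone cannot prevent two parallel scheduling requests racing for the same device; the authoritative consumption still happens in the PCI claim under the compute-global lock.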
openstack-mirroring
pushed a commit
that referenced
this pull request
Dec 13, 2022
* Update nova from branch 'master'
to 8b4104f9f78d0615720c0ba1e3e8cfced42efcc5
- Merge "Split PCI pools per PF"
- Split PCI pools per PF
Each PCI device and each PF is a separate RP in Placement, and the
scheduler allocates them specifically, so the PCI filtering and claiming
also need to handle these devices individually. Nova pooled PCI devices
together if they had the same device_spec, device type, and NUMA node.
Now this is changed so that only VFs from the same parent PF are pooled.
Fortunately nova already handled consuming devices for a single
InstancePCIRequest from multiple PCI pools, so this change does not
affect the device consumption code path.
The test_live_migrate_server_with_neutron test needed to be changed.
Originally this test used a compute with the following config:
* PF 81.00.0
** VFs 81.00.[1-4]
* PF 81.01.0
** VFs 81.01.[1-4]
* PF 82.00.0
And booted a VM that needed one VF and one PF. This request has two
widely different solutions:
1) allocate the VF from under 81.00 and therefore consume 81.00.0 and
allocate the 82.00.0 PF
This is what the test asserted would happen.
2) allocate the VF from under 81.00 and therefore consume 81.00.0 and
allocate the 81.00.0 PF and therefore consume all the VFs under it
This results in a different amount of free devices than #1)
AFAIK nova does not have any implemented preference for consuming PFs
without VFs. The test just worked by chance (some internal device and
pool ordering made it that way). However, when the PCI pools are split,
nova started choosing solution #2), making the test fail. As both
solutions are equally good from nova's scheduling contract perspective, I
don't consider this a behavior change. Therefore the test is updated
not to create a situation where two different scheduling solutions are
possible.
blueprint: pci-device-tracking-in-placement
Change-Id: I4b67cca3807fbda9e9b07b220a28e331def57624
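The pooling change can be sketched as extending the pool key with the VF's parent PF address (a simplified model with hypothetical field names, not nova's actual pool-key code):

```python
from collections import defaultdict


def pool_key(dev):
    # Before: devices with the same product, device type, and NUMA
    # node shared one pool.
    key = (dev["product_id"], dev["dev_type"], dev["numa_node"])
    # After: VFs are only pooled with siblings under the same parent
    # PF, matching the per-PF resource providers in Placement.
    if dev["dev_type"] == "type-VF":
        key += (dev["parent_addr"],)
    return key


def build_pools(devices):
    pools = defaultdict(list)
    for dev in devices:
        pools[pool_key(dev)].append(dev)
    return pools
```

With this key, identical VFs under 81.00.0 and 81.01.0 land in separate pools even though their specs match, while the consumption path (which already draws one InstancePCIRequest from multiple pools) is unchanged.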
openstack-mirroring
pushed a commit
that referenced
this pull request
Sep 28, 2023
* Update releases from branch 'master'
to b9e755d9a758e56251fe92c4059a87db52f87cd1
- Merge "Add release note links for 2023.2 Bobcat (#2)"
- Add release note links for 2023.2 Bobcat (#2)
If any of your deliverables does not have a release note link added
already under deliverables/bobcat, then please check whether there is an
open patch on that repository with the topic "reno-2023.2" [1] still
waiting to be approved.
[1] https://review.opendev.org/q/topic:reno-2023.2+is:open
Change-Id: Idbc9012575bb9518ab7986e20d44fcb49f3c0b09
openstack-mirroring
pushed a commit
that referenced
this pull request
Dec 8, 2023
* Update neutron from branch 'master'
to ef139fa65be36dc08d6c017b89e5c7d14efa54e5
- Merge "Update OVN client _get_port_options() code and utils"
- Update OVN client _get_port_options() code and utils
The OVN client code in _get_port_options() was changed in
the following ways:
1) variable ip_subnets was changed to port_fixed_ips to
reflect what it actually was.
2) Instead of just passing the list of subnet dictionaries
to callers and having them iterate it, create a
"subnets by id" dictionary that can be used in multiple
places, including the OVN utilities.
3) Move the calls of get_subnets_address_scopes() and
get_port_type_virtual_and_parents() so they are only
made if there are subnets associated with the port.
The OVN utility code was changed to accept the "subnets by id"
dictionary mentioned in #2.
Functionally the code should be identical.
Required a lot of test cleanup.
Change-Id: I3dd4c283485c316df0662b5d679b6e13f65b4841
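The "subnets by id" dictionary mentioned in #2 is essentially a one-pass index over the subnet list (hypothetical dict shapes, not the actual neutron code):

```python
def subnets_by_id(subnets):
    # Build the lookup once; callers then resolve a port's fixed IPs
    # in O(1) instead of re-iterating the subnet list each time.
    return {subnet["id"]: subnet for subnet in subnets}


def subnet_for_fixed_ip(fixed_ip, subnet_map):
    # Returns None when the port's fixed IP references no known subnet.
    return subnet_map.get(fixed_ip["subnet_id"])
```

Sharing one such dictionary between _get_port_options() and the OVN utilities is what lets the utility signatures take the map instead of the raw subnet list.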
openstack-mirroring
pushed a commit
that referenced
this pull request
Apr 3, 2024
* Update releases from branch 'master'
to 2eaeb691ec3c6c83f3c0fdd13236e0bdf0be180c
- Add release note links for 2024.1 Caracal #2
If any of your deliverables does not have a release note link added
already under deliverables/caracal, then please check whether there is an
open patch on that repository with the topic "reno-2024.1" [1] still
waiting to be approved.
[1] https://review.opendev.org/q/topic:reno-2024.1+is:open
Change-Id: I63038a7c5e33840e32c55f04bb7795fd56955104
openstack-mirroring
pushed a commit
that referenced
this pull request
Sep 23, 2024
* Update releases from branch 'master'
to 42558474cde1a3ad192eeb05c4132a88317bc48a
- Merge "Add release note links for 2024.2 Dalmatian #2"
- Add release note links for 2024.2 Dalmatian #2
If any of your deliverables does not have a release note link added
already under deliverables/dalmatian, then please check whether there is
an open patch on that repository with the topic "reno-2024.2" [1] still
waiting to be approved.
[1] https://review.opendev.org/q/topic:reno-2024.2+is:open
Change-Id: Idaefd8891934fb7dc6139dd952d5a4fd2ed49783
openstack-mirroring
pushed a commit
that referenced
this pull request
Apr 2, 2025
* Update releases from branch 'master'
to 0d86510eaabd1e27ccba2fc667a288e0e06f0cb2
- Add missing release note links for 2025.1 Epoxy #2
If any of your deliverables does not have a release note link added
already under deliverables/epoxy, then please check whether there is an
open patch on that repository with the topic "reno-2025.1" [1] still
waiting to be approved.
[1] https://review.opendev.org/q/topic:reno-2025.1+is:open
Change-Id: I85dbbf5b887e219dcb661777bf7f5d7da208f72a
openstack-mirroring
pushed a commit
that referenced
this pull request
Sep 19, 2025
* Update releases from branch 'master'
to 724d5b5c2efe19fff547360970515968abe12de3
- Merge "Add release note links for 2025.2 Flamingo (#2)"
- Add release note links for 2025.2 Flamingo (#2)
If any of your deliverables is missing a release note link under
deliverables/flamingo/, then please check whether there is an open patch
on that repository with the topic "reno-2025.2" [1] that is still
waiting for approval.
[1] https://review.opendev.org/q/topic:reno-2025.2+is:open
Change-Id: I1e99807a8af18dfaf2c9e60711d8827df5137987
Signed-off-by: Előd Illés <elod.illes@est.tech>