Closed
This reverts commit d49b289.
Thank you for contributing to openstack/openstack! openstack/openstack uses Gerrit for code review. Please visit http://wiki.openstack.org/GerritWorkflow and follow the instructions there.
openstack-gerrit
pushed a commit
that referenced
this pull request
Apr 3, 2015
Project: openstack/neutron f889ab4eafb7979b1d7995ea327a44125cd08433
Support multiple IPv6 prefixes on internal router ports
(Patch set #3 for the multiple-ipv6-prefixes blueprint)
Provides support for adding multiple IPv6 subnets to an internal router port. The limitation of one IPv4 subnet per internal router port remains, though a port may contain one IPv4 subnet with any number of IPv6 subnets.
This changes the behavior of both the router-interface-add and router-interface-delete APIs.
When router-interface-add is called with an IPv6 subnet, the subnet will be added to an existing internal port on the router with the same network ID if the existing port already has one or more IPv6 subnets. Otherwise, a new port will be created on the router for that subnet.
When calling router-interface-add with a port (one that has already been created using the port-create command), that port will be added to the router if it meets the following conditions:
1. The port has no more than one IPv4 subnet.
2. If the port has any IPv6 subnets, it must not have the same network ID as an existing port on the router if the existing port has any IPv6 subnets.
If the router-interface-delete command is called with a subnet, that subnet will be removed from the router port to which it belongs. If the subnet is the last subnet on a port, the port itself will be deleted from the router. If the router-interface-delete command is called with a port, that port will be deleted from the router.
This change also allows the RADVD configuration to support advertising multiple prefixes on a single router interface.
DocImpact
Change-Id: I7d4e8194815e626f1cfa267f77a3f2475fdfa3d1
Closes-Bug: #1439824
Partially-implements: blueprint multiple-ipv6-prefixes
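The two conditions for attaching a pre-created port can be sketched as a small predicate. This is an illustrative sketch only; the function name and the dict shapes for ports and subnets are assumptions, not Neutron's internal data model.

```python
# Hypothetical check mirroring the two rules above: at most one IPv4
# subnet per port, and no IPv6 network-ID clash with an existing
# router port that already carries IPv6 subnets.

def port_attachable(port, router_ports):
    """Return True if `port` may be added as a router interface.

    `port` and each entry of `router_ports` are dicts with keys
    'network_id' and 'subnets'; each subnet has an 'ip_version'.
    """
    v4 = [s for s in port["subnets"] if s["ip_version"] == 4]
    if len(v4) > 1:
        return False  # rule 1: no more than one IPv4 subnet
    if any(s["ip_version"] == 6 for s in port["subnets"]):
        for rp in router_ports:
            if rp["network_id"] != port["network_id"]:
                continue
            if any(s["ip_version"] == 6 for s in rp["subnets"]):
                return False  # rule 2: IPv6 clash on same network
    return True
```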
openstack-gerrit
pushed a commit
that referenced
this pull request
Sep 10, 2015
Project: openstack/api-site 2850130977c6b742bbe921237350ce31aeef596f
Add volume attributes description for Block Storage API
For v1:
#1: Add status attribute for JSON sample
#2: Fix snapshot_id, source_volid description for response side
#3: Add attachments non-null sample
For v2:
Add volume info response attributes
Change-Id: Iebe1eb2f12550d0e66bb594468ce6b28c9d3c756
Closes-Bug: #1331246
openstack-gerrit
pushed a commit
that referenced
this pull request
Nov 17, 2015
Project: openstack/governance 946c261753ab13ccc7ff385218141e9f16a2e5c1
Add Fuel to OpenStack Projects
Fuel is an open source deployment and management tool for OpenStack. Fuel's mission is to streamline and accelerate the otherwise time-consuming, often complex, and error-prone process of deploying, testing and maintaining various configurations of OpenStack at scale. Fuel has been successfully used to deploy OpenStack in environments ranging from personal proof-of-concept micro-clouds to production infrastructures composed of hundreds of nodes running tens of thousands of instances, using a wide variety of network and storage backends, with out-of-the-box support of most OpenStack projects.
Fuel was used by many successful competitors in the Rule The Stack competition in Vancouver: https://01.org/openstack/openstacksummitvancouverbc2015/rule-stack-vancouver-results
Fuel is an enabler technology for integration of OpenStack with other cloud computing initiatives, such as Ceph, OPNFV, Kubernetes:
https://drive.google.com/file/d/0BxYswyvIiAEZUEp4aWJPYVNjeU0/view
https://wiki.opnfv.org/get_started/open_questions#installer_comparison
http://googlecloudplatform.blogspot.com/2015/02/run-container-based-applications-on-OpenStack-with-Kubernetes.html
According to the OpenStack user survey before the Liberty OpenStack Summit (answers collected in March-April 2015), Fuel is the #3 deployment tool used to set up OpenStack: http://superuser.openstack.org/articles/openstack-users-share-how-their-deployments-stack-up
Below is a summary of the state of the Fuel project's compliance with the OpenStack Projects requirements as defined in: http://governance.openstack.org/reference/new-projects-requirements.html
Alignment with OpenStack Mission: Fuel provides a REST API, a web UI, a command line interface, and a plugin framework that automate and simplify deployment and operation of cloud infrastructures based on OpenStack.
The OpenStack way ("the 4 opens"):
Open Source: All Fuel components are licensed under Apache License v2.0. Fuel does not have library dependencies that restrict how the project may be distributed or deployed.
Open Community: Fuel core reviewers are approved by contributors of the project. The Fuel project has held weekly IRC meetings on OpenStack channels with public meeting agenda, minutes, and logs since February 2014: http://eavesdrop.openstack.org/meetings/fuel/
Open Development: Fuel has used public code reviews on OpenStack infrastructure since September 2013: https://review.openstack.org/46666
Commits to all Fuel repositories are approved by core reviewers and validated with automated tests on publicly accessible CI: https://ci.fuel-infra.org/
The Fuel project has appointed Andrew Woodward as a Community Ambassador to serve as a liaison with other OpenStack projects. Fuel developers actively engage other OpenStack projects for collaboration on topics ranging from high availability to storage:
http://stackalytics.com/report/contribution/ha-guide/90
http://lists.openstack.org/pipermail/openstack-dev/2014-July/040484.html
The Fuel project is implemented primarily in Python, adjusts its Python dependencies for compliance with OpenStack global requirements, and works on reconciling its differences with the OpenStack Puppet modules:
https://blueprints.launchpad.net/fuel/+spec/automate-the-verification-of-compliance-openstack-global-requirement
https://lwn.net/Articles/648331/
Open Design: Fuel uses the openstack-dev mailing list with the [Fuel] tag for development discussions. All feature development for Fuel is managed via blueprints in Launchpad and specifications in the fuel-specs repository on OpenStack infrastructure:
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Subscribe_to_mailing_lists
https://blueprints.launchpad.net/fuel/
https://review.openstack.org/#/q/project:stackforge/fuel-specs,n,z
Interoperability with the rest of OpenStack: The Fuel REST API and its CLI and web frontends use Keystone for authentication. The Fuel REST API has an overlap with Tuskar; the Fuel team would be happy to work with the Tuskar community on a long-term alignment of APIs if the Technical Committee determined such a step would be preferable.
Active team: As of July 7, 2015, the Fuel project has 23 core reviewers posting 54.5 reviews per day, and over 200 active contributors: http://stackalytics.com/report/contribution/fuel-group/90
Change-Id: I8c6579f84a42324eaacae52e459b51c4e7b92b56
openstack-gerrit
pushed a commit
that referenced
this pull request
Dec 10, 2015
Project: openstack/swift 0553d9333ed0045c4d209065b315533a33e5d7d7
Put part-replicas where they go
It's harder than it sounds. There were really three challenges.
Challenge #1 Initial Assignment
===============================
Before starting to assign parts on this new shiny ring you've constructed, maybe we'll pause for a moment up front and consider the lay of the land. This process is called the replica_plan. The replica_plan approach separates part assignment failures into two modes:
1) we considered the cluster topology and its weights and came up with the wrong plan
2) we failed to execute on the plan
I failed at both parts plenty of times before I got it this close. I'm sure a counter example still exists, but when we find it the new helper methods will let us reason about where things went wrong.
Challenge #2 Fixing Placement
=============================
With a sound plan in hand, it's much easier to fail to execute on it the less material you have to execute with - so we gather up as many parts as we can - as long as we think we can find them a better home. Picking the right parts to gather is a black art - when you notice a rebalance is slow it's because it's spending so much time iterating over replica2part2dev trying to decide just the right parts to gather. The replica plan can help at least in the gross dispersion collection to gather up the worst offenders first before considering balance. I think trying to avoid picking up parts that are stuck to the tier before falling into a forced grab on anything over parts_wanted helps with stability generally - but depending on where the parts_wanted are in relation to the full devices it's pretty easy to pick up something that'll end up really close to where it started. I tried to break the gather methods into smaller pieces so it looked like I knew what I was doing.
Going with a MAXIMUM gather iteration instead of balance (which doesn't reflect the replica_plan) doesn't seem to be costing me anything - most of the time the exit condition is either solved or all the parts are overly aggressively locked up on min_part_hours. So far, it mostly seems that if the thing is going to balance this round it'll get it in the first couple of shakes.
Challenge #3 Crazy replica2part2dev tables
==========================================
I think there are lots of ways "scars" can build up in a ring which can result in very particular replica2part2dev tables that are physically difficult to dig out of. It's repairing these scars that will take multiple rebalances to resolve. ... but at this point ... ... lacking a counter example ... I've been able to close up all the edge cases I was able to find. It may not be quick, but progress will be made. Basically my strategy just required a better understanding of how previous algorithms were able to *mostly* keep things moving by brute forcing the whole mess with a bunch of randomness. Then when we detect our "elegant" careful part selection isn't making progress - we can fall back to the same old tricks.
Validation
==========
We validate against duplicate part replica assignment after rebalance and raise an ERROR if we detect more than one replica of a part assigned to the same device. In order to meet that requirement we have to have as many devices as replicas, so attempting to rebalance with too few devices w/o changing your replica_count is also an ERROR, not a warning.
Random Thoughts
===============
As usual with rings, the test diff can be hard to reason about - hopefully I've added enough comments to assure future me that these assertions make sense. Despite being a large rewrite of a lot of important code, the existing code is known to have failed us. This change fixes a critical bug that's trivial to reproduce in a critical component of the system. There's probably a bunch of error messages and exit status stuff that's not as helpful as it could be considering the new behaviors.
Change-Id: I1bbe7be38806fc1c8b9181a722933c18a6c76e05
Closes-Bug: #1452431
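The duplicate-replica validation described above can be sketched as a single scan over the table. This is an illustrative sketch; the function name is an assumption, not Swift's ring-builder code, and the table is modeled as one row per replica with row[part] giving the device id.

```python
# Toy check for the Validation rule: no partition may have two
# replicas assigned to the same device after a rebalance.

def find_duplicate_assignments(replica2part2dev):
    """Return a list of (part, dev) pairs assigned more than once."""
    duplicates = []
    num_parts = len(replica2part2dev[0])
    for part in range(num_parts):
        seen = set()
        for row in replica2part2dev:
            dev = row[part]
            if dev in seen:
                duplicates.append((part, dev))
            seen.add(dev)
    return duplicates
```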
openstack-gerrit
pushed a commit
that referenced
this pull request
Feb 9, 2017
Project: openstack-infra/project-config 85e42750381ecc6dc76b7368933051674a111bdc
Step #3 - Retiring nova-docker: removing project ACL, jobs, channels
Change-Id: I73c07d75dde2231b876ef9aeb4a09d2f26256bf7
openstack-gerrit
pushed a commit
that referenced
this pull request
Mar 29, 2017
Project: openstack/glance 327682e8528bf4effa6fb16e8cabf744f18a55a1
Fix incompatibilities with WebOb 1.7
WebOb 1.7 changed [0] how request bodies are determined to be readable. Prior to version 1.7, the following is how WebOb determined whether a request body is readable:
#1 Request method is one of POST, PUT or PATCH
#2 ``content_length`` is set
#3 Special flag ``webob.is_body_readable`` is set
The special flag ``webob.is_body_readable`` was used to signal WebOb to consider a request body readable despite the content length not being set. #1 above is how ``chunked`` Transfer Encoding was supported implicitly in WebOb < 1.7.
Now with WebOb 1.7, a request body is considered readable only if ``content_length`` is set and it's non-zero [1]. So, we are only left with #2 and #3 now. This drops the implicit support for ``chunked`` Transfer Encoding that Glance relied on. Hence, to emulate #1, Glance must set the special flag upon checking the HTTP methods that may have bodies. This is precisely what this patch attempts to do.
[0] Pylons/webob#283
[1] https://github.com/Pylons/webob/pull/283/files#diff-706d71e82f473a3b61d95c2c0d833b60R894
Closes-bug: #1657459
Closes-bug: #1657452
Co-Authored-By: Hemanth Makkapati <hemanth.makkapati@rackspace.com>
Change-Id: I19f15165a3d664d5f3a361f29ad7000ba2465a85
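The emulation of rule #1 can be sketched against a bare WSGI environ dict. The flag name ``webob.is_body_readable`` comes from the commit above; the helper function itself is an assumption, not Glance's actual request wrapper.

```python
# Minimal sketch: mark chunked bodies readable under WebOb >= 1.7 by
# setting the special environ flag for methods that may carry bodies.
BODY_METHODS = {"POST", "PUT", "PATCH"}

def mark_body_readable(environ):
    """Set webob.is_body_readable so a body without a Content-Length
    (e.g. chunked Transfer-Encoding) is still considered readable."""
    if environ.get("REQUEST_METHOD", "") in BODY_METHODS:
        environ["webob.is_body_readable"] = True
    return environ
```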
openstack-gerrit
pushed a commit
that referenced
this pull request
Apr 25, 2017
Project: openstack/nova c61ab41711b6a1d7b884966ffd5f04b41c20a2eb PowerVM Driver: spawn/destroy #3: TaskFlow This change set builds on I85f740999b8d085e803a39c35cc1897c0fb063ad, introducing the TaskFlow framework for spawn and destroy. It should be functionally equivalent to the aforementioned, but sets us up for the more complex TaskFlow usage in subsequent additions to the PowerVM driver implementation. Change-Id: Idfefc2db18d0f473a028b7bb8b593d39067e090d Partially-Implements: blueprint powervm-nova-compute-driver
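TaskFlow structures work as tasks with execute/revert pairs so a failed spawn can be rolled back cleanly. Here is a stdlib-only sketch of that pattern under simplifying assumptions (TaskFlow's real engine has richer semantics, e.g. it can also revert the failing task itself; none of these names are the PowerVM driver's code):

```python
# Illustrative execute/revert flow: run tasks in order; on failure,
# revert the tasks that completed, in reverse order, then re-raise.

class Task:
    def execute(self, ctx): ...
    def revert(self, ctx): ...

def run_flow(tasks, ctx):
    done = []
    try:
        for task in tasks:
            task.execute(ctx)
            done.append(task)
    except Exception:
        for task in reversed(done):
            task.revert(ctx)
        raise
```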
openstack-gerrit
pushed a commit
that referenced
this pull request
Oct 10, 2017
* Update governance from branch 'master'
- Merge "Adjustments to Infra contributors top-5 entry"
- Adjustments to Infra contributors top-5 entry
Now that If344bb3862d0b54a37cd28933ef1f01ad075ba31 has merged, make
non-blocking adjustments requested by reviewers there:
Move the Infra contributors entry from slot #3 to #2 since there
seems to be some perceived prioritization at work, and the Glance
situation is now reported to be less dire.
Document myself as the TC sponsor for the Infra contributors entry.
Change-Id: I50d13e02c9fa4bd1c36d61529c89efdec1865e31
openstack-gerrit
pushed a commit
that referenced
this pull request
Feb 27, 2019
* Update neutron-lib from branch 'master'
- Merge "Remove ml2's accidental dependency on l3"
- Remove ml2's accidental dependency on l3
The accidental dependency was never in effect since the neutron side of
the relevant changes was not merged yet.
I think I made a mistake in https://review.openstack.org/631515.
We added the 'router' extension as a dependency of the
'floatingip-autodelete-internal' extension, which looks like a perfectly
reasonable thing to do at first sight. However since the 'external-net'
extension was de-extensionalized and made part of the ml2 plugin, the
'floatingip-autodelete-internal' extension also had to be implemented by
the ml2 plugin. This complicated setup practically made the l3 plugin
a dependency of the ml2 plugin. (That's why unit tests started failing
in patch set #3 of the neutron change.) Which of course is nonsense.
So this change removes the dependency. The neutron side of this
change still degrades gracefully even without the explicit dependency
between the extensions, so I don't think we're losing anything by not
having that dependency.
Change-Id: I8825eaf4f46ea2639131e34f9b833af1de6ab1b4
Needed-By: https://review.openstack.org/624751
Partial-Bug: #1806032
Related-Change: https://review.openstack.org/631515
openstack-gerrit
pushed a commit
that referenced
this pull request
Mar 20, 2019
* Update nova from branch 'master'
- Merge "Add docs for compute capabilities as traits"
- Add docs for compute capabilities as traits
Change I15364d37fb7426f4eec00ca4eaf99bec50e964b6 added the
ability for the compute service to report a subset of driver
capabilities as standard COMPUTE_* traits on the compute node
resource provider.
This adds administrator documentation to the scheduler docs
about the feature and how it could be used with flavors. There
are also some rules and semantic behavior around how these traits
work so that is also documented.
Note that for cases #3 and #4 in the "Rules" section the
update_available_resource periodic task in the compute service
may add the compute-owned traits again automatically but it
depends on the [compute]/resource_provider_association_refresh
configuration option, which if set to 0 will disable that auto
refresh and a restart or SIGHUP is required. To avoid confusion
in these docs, I have opted to omit the mention of that option
and just document the action that will work regardless of
configuration which is to restart or SIGHUP the compute service.
Change-Id: Iaeec92e0b25956b0d95754ce85c68c2d82c4a7f1
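Nova's documented flavor syntax for these traits is an extra spec of the form ``trait:<TRAIT_NAME>=required``. A toy filter showing how such a spec maps to a provider-trait check (the function itself is an assumption for illustration, not the scheduler's code):

```python
# Collect required traits from flavor extra specs and check that a
# candidate resource provider reports all of them.

def provider_satisfies(flavor_extra_specs, provider_traits):
    required = {
        key.split(":", 1)[1]
        for key, val in flavor_extra_specs.items()
        if key.startswith("trait:") and val == "required"
    }
    return required <= set(provider_traits)
```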
openstack-gerrit
pushed a commit
that referenced
this pull request
Jan 15, 2020
* Update tripleo-common from branch 'master'
- Merge "image_uploader (attempt #3): fix images upload with no labels"
- image_uploader (attempt #3): fix images upload with no labels
If an image has no labels, we set labels to {}; so to build tag_label in that case we need to catch the TypeError exception, or tag_from_label.format(**labels) will raise, since labels would be NoneType. We could have removed the default {} for Labels, but it's better to keep it for further use in the image uploader, when the parameter is required for certain methods.
Closes-Bug: #1857012
Change-Id: I35d73e7eca6f3cc208eda5d4c78a7bdd6cd7b810
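The failure mode is that ``str.format(**labels)`` raises TypeError when labels is None (and KeyError when a referenced label is absent). A sketch of the guard, with names taken loosely from the commit message rather than tripleo-common's actual code, and a hypothetical ``default_tag`` fallback:

```python
# Build an image tag from a label template, tolerating images that
# have no labels at all.

def tag_from_labels(tag_from_label, labels, default_tag="latest"):
    try:
        return tag_from_label.format(**labels)
    except (TypeError, KeyError):
        # TypeError: labels is None (image has no labels);
        # KeyError: labels exist but lack the templated key.
        return default_tag
```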
openstack-gerrit
pushed a commit
that referenced
this pull request
Apr 3, 2020
* Update neutron from branch 'master'
- Merge "Wait before deleting trunk bridges for DPDK vhu"
- Wait before deleting trunk bridges for DPDK vhu
DPDK vhostuser mode (DPDK/vhu) means that when an instance is powered
off the port is deleted, and when an instance is powered on a port is
created. This means a reboot is functionally a super fast
delete-then-create. Neutron trunking mode in combination with DPDK/vhu
implements a trunk bridge for each tenant, and the ports for the
instances are created as subports of that bridge. The standard way a
trunk bridge works is that when all the subports are deleted, a thread
is spawned to delete the trunk bridge, because that is an expensive and
time-consuming operation. That means that if the port in question is
the only port on the trunk on that compute node, this happens:
1. The port is deleted
2. A thread is spawned to delete the trunk
3. The port is recreated
If the trunk is deleted after #3 happens then the instance has no
networking and is inaccessible; this is the scenario that was dealt with
in a previous change [1]. But there continue to be issues with errors
"RowNotFound: Cannot find Bridge with name=tbr-XXXXXXXX-X". What is
happening in this case is that the trunk is being deleted in the middle
of the execution of #3, so that it stops existing in the middle of the
port creation logic but before the port is actually recreated.
Since this is a timing issue between two different threads it's
difficult to stamp out entirely, but I think the best way to do it is to
add a slight delay in the trunk deletion thread, just a second or two.
That will give the port time to come back online and avoid the trunk
deletion entirely.
[1] https://review.opendev.org/623275
Related-Bug: #1869244
Change-Id: I36a98fe5da85da1f3a0315dd1a470f062de6f38b
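The delayed deletion described above can be sketched with a timer that re-checks for subports before removing the bridge. This is a threading-based illustration only; Neutron's OVS agent uses its own worker/event machinery, and all names here are assumptions:

```python
import threading

# Schedule trunk-bridge deletion after a short delay; if the port came
# back (the bridge has subports again), skip the deletion entirely.

def schedule_trunk_delete(bridge, has_subports, delete_bridge, delay=2.0):
    def _maybe_delete():
        if not has_subports(bridge):
            delete_bridge(bridge)
    timer = threading.Timer(delay, _maybe_delete)
    timer.start()
    return timer
```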
openstack-mirroring
pushed a commit
that referenced
this pull request
Jun 25, 2021
* Update openstack-tempest-skiplist from branch 'master'
to 790b7202312674fa191edbf63a055e6e90b2c36f
- Merge "Fixup tempest skip for bug/1933115"
- Fixup tempest skip for bug/1933115
See related-bug comment #3; this tweaks the skip for a failing test.
Related-Bug: 1933115
Change-Id: Ia56f013220e242b9b048d38a925c17492fbd447b
openstack-mirroring
pushed a commit
that referenced
this pull request
Aug 19, 2021
* Update nova-specs from branch 'master'
to ed015789ef4c12b95e18366a98837e110ced5776
- Revert "Amend configurable-instance-hostnames to include response changes"
This reverts commit aec9a01d54ce02d24745e80cc9d9635a0eeb7048. This
change proposed renaming ``OS-EXT-SRV-ATTR:hostname`` attribute in
``/servers`` responses to match the ``hostname`` attribute in requests,
which would avoid a mismatch between the server requests and server
responses. However, this mismatch already exists for the ``host`` and
``node`` attributes of the server, which are represented as ``host`` and
``hypervisor_hostname`` in requests (from microversion 2.74) but
``OS-EXT-SRV-ATTR:host`` and ``OS-EXT-SRV-ATTR:hypervisor_hostname`` in
responses. In addition, there are many more fields with the
``OS-EXT-SRV-ATTR`` prefixes and it seems odd to remove it for one field
(and force clients to make changes) without removing the other
extension-based prefixes for both the various ``/servers`` APIs and the
``/flavors`` APIs. It would be better to tackle this in a separate
microversion or potentially not at all [1].
[1] https://etherpad.opendev.org/p/nova-api-cleanup (Item #3, line 46)
Change-Id: I4caff30f2b3cc12f0970874dcbee04e572e8ccc5
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
openstack-mirroring
pushed a commit
that referenced
this pull request
Jan 10, 2023
* Update ironic from branch 'master'
to 81e10265ce08bd525388111720b91ca10c99bb28
- Merge "Use association_proxy for ports node_uuid"
- Use association_proxy for ports node_uuid
This change adds 'node_uuid' to ironic.objects.port.Port
and adds a relationship using association_proxy in
models.Port. Using the association_proxy removes the need
to do the node lookup to populate node uuid for ports in
the api controller.
NOTE:
On port create a read is added to read the port from the
database, this ensures node_uuid is loaded and solves the
DetachedInstanceError which is otherwise raised.
Bumps Port object version to 1.11
With patch:
1. Returned 20000 ports in python 2.7768702507019043
seconds from the DB.
2. Took 0.433107852935791 seconds to iterate through
20000 port objects.
Ports table is roughly 12800000 bytes of JSON.
3. Took 5.662816762924194 seconds to return all 20000
ports via ports API call pattern.
Without patch:
1. Returned 20000 ports in python 1.0273635387420654
seconds from the DB.
2. Took 0.4772777557373047 seconds to iterate through
20000 port objects.
Ports table is roughly 12800000 bytes of JSON.
3. Took 147.8800814151764 seconds to return all 20000
ports via ports API call pattern.
Conclusion:
Test #1 plain dbapi.get_port_list() test is ~3 times
slower, but Test #3 doing the API call pattern test
is ~2500% better.
Story: 2007789
Task: 40035
Change-Id: Iff204b3056f3058f795f05dc1d240f494d60672a
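The API-call speedup measured above comes from eliminating a per-port node lookup (an N+1 pattern) in favor of data the join via association_proxy already carries. A toy, dict-based contrast of the two access patterns (data shapes are assumptions, not Ironic's models):

```python
# N+1 pattern: one node lookup per port.
def ports_with_node_uuid_n_plus_1(ports, get_node):
    return [dict(p, node_uuid=get_node(p["node_id"])["uuid"]) for p in ports]

# Batched pattern: a single node_id -> uuid map, which is roughly what
# the association_proxy provides for free through the join.
def ports_with_node_uuid_batched(ports, nodes):
    uuid_by_id = {n["id"]: n["uuid"] for n in nodes}
    return [dict(p, node_uuid=uuid_by_id[p["node_id"]]) for p in ports]
```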
openstack-mirroring
pushed a commit
that referenced
this pull request
Sep 29, 2023
* Update releases from branch 'master'
to b47e488015a94b7887e4072b9c4cd9d2cb3bc257
- Merge "Add release note links for 2023.2 Bobcat (#3)"
- Add release note links for 2023.2 Bobcat (#3)
If any of your deliverables does not have a release note link added already under deliverables/bobcat, then please check whether there is an open patch on that repository with the topic "reno-2023.2" [1] still waiting to be approved.
[1] https://review.opendev.org/q/topic:reno-2023.2+is:open
Change-Id: Ie3976dd2e4e9a6e8410b57737294b6ea231fe8a6
openstack-mirroring
pushed a commit
that referenced
this pull request
Sep 26, 2024
* Update releases from branch 'master'
to 5782f5d831016331e2c8f0bc947f4696de42bbc4
- Merge "Add release note links for 2024.2 Dalmatian #3"
- Add release note links for 2024.2 Dalmatian #3
If any of your deliverables does not have a release note link added already under deliverables/dalmatian, then please check whether there is an open patch on that repository with the topic "reno-2024.2" [1] still waiting to be approved.
[1] https://review.opendev.org/q/topic:reno-2024.2+is:open
Change-Id: Ifdf3d004d71a938674818940b8050b27bd21f05b
openstack-mirroring
pushed a commit
that referenced
this pull request
Sep 29, 2025
* Update releases from branch 'master'
to 5ba49c1af59d5e2244e227a221555f6d371379aa
- Add release note links for 2025.2 Flamingo (#3)
If any of your deliverables is missing a release note link under deliverables/flamingo/, then please check whether there is an open patch on that repository with the topic "reno-2025.2" [1] that is still waiting for approval.
[1] https://review.opendev.org/q/topic:reno-2025.2+is:open
Change-Id: Ia1423e4b8cea08097bc3afb758bc36e42045df59
Signed-off-by: Előd Illés <elod.illes@est.tech>