When overriding a field defined as a function field, the field must either
create a corresponding column that is not a fields.function (if stored), or
have no corresponding column (if not stored).
Idea: look up the model's fields in method `_setup_base()` instead of
method `__init__()`. This does not make a significant difference when
installing or upgrading modules, but when simply loading a registry, the
(expensive) field lookup is done once per model instead of once per class.
When you change the country of your company, each field of the company address
keeps its attrs. This is why the company address stays readonly when
use_parent_address is checked.
Closes #4808
opw: 627033
In a workflow context (for instance, in the invoice workflow),
context is not passed.
Therefore, relying on the 'recompute' key being in the context
in order to avoid recomputing the fields does not work with workflows.
It leads to huge performance issues,
as fields are recomputed recursively (instead of sequentially)
when several records are involved.
For instance, when reconciling several invoices with one payment
(100 invoices with 1 payment for instance),
records of each invoice are recomputed uselessly in each workflow call
(for each "confirm_paid" method done for each invoice).
With a significant number of invoices (100, for instance),
it even leads to a "Maximum recursion depth reached" error.
Closes #4905
This fixes a bug introduced by commit f650522bbf
(related fields should not be copied by default). Inherited fields are a
particular case, and given the implementation of copy(), they must be copied if
their original field is copied.
The test on copy() in test_orm has been modified to show the bug.
This helps fix old-api onchange methods with a record id as a parameter.
Browsing this record id may be problematic, since it reads the record in an
environment with an empty context. This is really problematic when the record
is a new record, because such a record only exists in a given environment.
The onchange() on new records processes fields in non-predictable order. This
is problematic when onchange methods are designed to be applied after each
other. The expected order is presumed to be the one of the fields in the view.
In order to implement this behavior, the JS client invokes method onchange()
with the list of fields (in view order) instead of False. The server then uses
that order for evaluating the onchange methods.
This fixes #4897.
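The order-sensitive dispatch described above can be sketched as follows (hypothetical names; the real logic lives in Odoo's `onchange()` machinery, which receives the field list from the JS client):

```python
def process_onchange(record, field_names, onchange_methods):
    # Hypothetical sketch: apply onchange handlers in the given
    # (view) order rather than in arbitrary dict order, so handlers
    # designed to run after one another see each other's results.
    for name in field_names:
        handler = onchange_methods.get(name)
        if handler:
            record.update(handler(record))
    return record
```

With an unordered container of handlers, a handler for field `b` that depends on the value computed by the handler for field `a` would only work reliably once the view order `["a", "b"]` is honoured.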
The mechanism to determine the table and column names of new-api many2many
fields only worked for many2many fields created from old-api many2many columns!
This fixes #4851.
This reverts commit 8cd2cc8910.
It turned out that forcing recomputing fields as user admin breaks some
existing behavior. Instead, we make the recomputation as user admin explicit
by adding compute_sudo=True in the field's definition.
Creating a country is not something to do on the fly!
The impact could be bigger than what people expect (no accounting configured, ...).
A bad manipulation is more often the culprit: e.g. 'Belgium ' (with a trailing
whitespace) would create a new country, while the user didn't see the
difference and used both countries without noticing.
If a selection field is created with an empty list of choices (e.g. added by
submodules), initialise the field as a varchar column (most common).
Check if the list exists to avoid crashing while checking the type of the first
key.
Fixes #3810
In currencies, advanced search with "equals" or "not equals" on rates
was only possible with a date in server format (anything else crashed),
because the name field of the model res.currency.rate is a date.
It is now possible to search rates with just a float and the equality
operators; this returns currencies that have at least one rate set to
that float value.
It is also possible to search rates with the equality operators using
a date in the user's (context) date format.
If the variable already existed outside the ``foreach`` context,
its value is copied back into the global context at the end of the ``foreach``.
Fixes #4461 - Q74531 - Q71486 - Q71675
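A minimal sketch of the copy-back behaviour, using a plain dict as the rendering context (names are illustrative, not the actual QWeb API):

```python
def render_foreach(context, var, items, body):
    # Render `body` once per item, binding `var` in a local copy of
    # the context. If `var` already existed outside the foreach, its
    # last value is copied back into the enclosing (global) context.
    existed = var in context
    local = dict(context)
    output = []
    for item in items:
        local[var] = item
        output.append(body(local))
    if existed and items:
        context[var] = local[var]
    return output
```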
When returning an HTTPException e.g. by calling ``request.not_found()``
which returns a ``werkzeug.exceptions.NotFound()``, the http system
would log a warning as HTTPException is neither a subclass of Odoo's
Response nor a subclass of werkzeug's BaseResponse.
Move the string response case up (for flow clarity), and convert
HTTPException instances to Werkzeug responses then fall into the normal
BaseResponse -> Response case to ultimately get an Odoo response object
out of the HTTPException instance.
When changing the type of a column (if the size differs, for example),
a 'selection' field should be treated like a 'char' field (since they
are internally the same column type).
This fixes some migration issues where 'char' fields were correctly
changed but 'selection' fields were not.
Use case:
* create a 6.0 db with 'stock' module installed
* 'state' field in 'stock.move' model is of type 'character varying(16)'
* migrate it to 8.0
* 'state' field is still 'character varying(16)' but should normally be
'character varying'
Acting as a mailing-list-like distribution system, the system used
to set the envelope sender (aka bounce address) to the From header
of the message, effectively pretending to be the original sender.
In some cases the domain of the From header explicitly forbids
sending emails from external systems (e.g. with a hardfail SPF
record), so this could cause the email to be rejected by some
spam filters, depending on their policies.
The system will now use a local bounce address in the form:
mail-catchall-alias@mail-catchall-domain
or, if no catchall is configured:
postmaster-odoo@mail-catchall-domain
(as soon as there is a mail.catchall.domain configured).
It will only fall back to using the From header when no
catchall domain is configured.
Fixes issue #3347.
Closes #3393.
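The selection logic reads roughly like this (a sketch only; the parameter names mirror the `mail.catchall.alias` / `mail.catchall.domain` system parameters, not the actual function signature):

```python
def bounce_address(catchall_alias, catchall_domain, from_addr):
    # With a catchall domain configured, build a local bounce address
    # so the envelope sender stays within our own domain; otherwise
    # fall back to the original From header.
    if catchall_domain:
        alias = catchall_alias or "postmaster-odoo"
        return "%s@%s" % (alias, catchall_domain)
    return from_addr
```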
Letting PostgreSQL low-level exceptions bubble up
ensures that the mechanism for automatically
retrying transactions will work.
In case of transient errors such as deadlocks or
serialization errors, the transaction will be
transparently retried. Previously they were
transformed into ValueError and caused a permanent
failure immediately.
The fallback to ValueError is meant for invalid
expressions or expressions that use variables not
provided in the evaluation context. Other
exception types should be preserved (this is
further improved in Odoo 8).
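The distinction can be illustrated with a stripped-down evaluator (illustrative only; Odoo's real helper is `safe_eval`): only invalid-expression errors become ValueError, while anything raised *by* the evaluated expression, such as a database error, propagates unchanged so the retry machinery can see it.

```python
class TransientDBError(Exception):
    """Stand-in for a low-level error such as a deadlock."""

def evaluate(expr, context):
    # Wrap only invalid-expression errors in ValueError; preserve
    # every other exception type raised during evaluation.
    try:
        return eval(expr, {"__builtins__": {}}, dict(context))
    except (SyntaxError, NameError, TypeError) as exc:
        raise ValueError("invalid expression %r: %s" % (expr, exc))
```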
Saving log_handler to the config file is not currently special-cased;
the value is thus dumped as the repr() of the list, but not deserialized
with literal_eval. This means -s breaks log_handler and the
configuration file example is incorrect (it looks like a list).
Repeated -s invocations further break log_handler by interpreting the
original value as a string, putting it into a list, then reserialising
that with repr(), injecting a bunch of escaping backslashes. The config
file soon fills with backslashes and doubles in size with each new -s.
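The round-trip damage is easy to reproduce (a sketch of the serialization mismatch, not Odoo's actual config code):

```python
import ast

value = ["odoo.sql_db:DEBUG"]
saved = repr(value)  # what gets written: "['odoo.sql_db:DEBUG']"
# Read back without literal_eval, the value is a plain string; the
# next save wraps it in a list and repr()s it again, adding a layer
# of quoting/escaping on every round trip.
resaved = repr([saved])
assert len(resaved) > len(saved)
# Deserializing with literal_eval instead restores the original list.
assert ast.literal_eval(saved) == value
```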
Furthermore, for some reason the whole thing breaks --log-handler (and
aliases) entirely; once the wrong log_handler has been saved, none of
them works anymore.
Fixes #4552, closes #4157
Saving multiple levels for the same logger should work with few issues,
but over time pathological (basket-case) usage (e.g. using ``-s`` all
the time) may build pointlessly huge lists in the config file.
Only keep the last level for each logger when saving to a config file.
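Keeping only the last level per logger might look like this (a hypothetical helper, assuming ``logger:LEVEL`` specs as in log_handler):

```python
def dedup_log_handler(specs):
    # Keep only the last level seen for each logger, preserving the
    # order in which loggers first appeared (dict insertion order).
    levels = {}
    for spec in specs:
        logger, _, level = spec.rpartition(":")
        levels[logger] = level
    return ["%s:%s" % (name, level) for name, level in levels.items()]
```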
For log_handler (list of logger configuration specs), having the
configuration file and the CLI configuration be exclusive (one
overwriting the other) is detrimental: it precludes keeping the
configuration file as a convenient baseline and only altering the subset
of loggers of interest at any given time. Combine specs from both
sources instead of overwriting one with the other.
* remove log_handler and its special case from the first options loop
* remove seeding of option with DEFAULT_LOG_HANDLER
* my_default is the baseline "configuration file" value; it's None if
not provided, which is not convenient. Use DEFAULT_LOG_HANDLER for it
as the baseline configuration-file value. DEFAULT_LOG_HANDLER isn't a
list anymore; it's a CSV of logger specs (same as the file-serialized
form)
* could actually use a DEFAULT_LOG_HANDLER of ``:`` (the default logger
is root, the default level is info), but that might be a tad too
cryptic
* things are weird between my_default and the file-sourced value:
_parse_config is first run with my_defaults, then with the file, but if
there's no file it's re-run with the already-parsed my_defaults. So when
the file and the command-line values are of different types,
_parse_config must be ready to handle both
add_option(action=append*) always modifies the ``default`` list
in-place. When using DEFAULT_LOG_HANDLER directly, that means
log_handler is always equal to DEFAULT_LOG_HANDLER since they're the
same list object. Thus the --log-handler command-line option would
never overwrite the log_handler value from the configuration file,
which is unexpected.
Fix that by copying DEFAULT_LOG_HANDLER before passing it as the
option's default value.
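The in-place mutation is straightforward to demonstrate with optparse (a standalone sketch; ``--log-handler`` here is just the option under discussion, not wired into Odoo's config machinery):

```python
import optparse

def build_parser(default):
    parser = optparse.OptionParser()
    parser.add_option("--log-handler", action="append", default=default)
    return parser

shared = [":INFO"]
opts, _ = build_parser(shared).parse_args(["--log-handler", "odoo.sql_db:DEBUG"])
# append modified the default list in place: the parsed value *is*
# the module-level list, which is now polluted for every later use.
assert opts.log_handler is shared
assert shared == [":INFO", "odoo.sql_db:DEBUG"]

# The fix: pass a copy, so the shared default stays pristine.
pristine = [":INFO"]
opts2, _ = build_parser(list(pristine)).parse_args(
    ["--log-handler", "odoo.sql_db:DEBUG"])
assert opts2.log_handler == [":INFO", "odoo.sql_db:DEBUG"]
assert pristine == [":INFO"]
```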
When a stale XML ID exists in the database, `_update_dummy()`
must consider it as missing entirely, and the next call
to `_update()` will take care of cleaning up the old XML ID.
Failing to do so for `noupdate` records means the `_update`
will never happen, and as soon as another record is created
or updated with a relationship to that stale XML ID, it will
plainly crash the installation/update.
Completes/improves fd6dde7ca
Because Werkzeug uses/provides flow-control exceptions via
HTTPException (which can be used as straight responses) they are used in
a few places of the web client, when triggering some redirections for
instance.
Breaking into the debugger for such mundane situations is surprising and
inconvenient for developers trying to debug actual issues in the system:
HTTPExceptions are by and large not errors per se, and shouldn't warrant
triggering post-mortem debugging.
So in the non-RPC dispatcher, don't post-mortem on HTTPException either.