Setting NOT NULL constraints on required fields may fail if there are
any rows with an unset value for this column.
The ORM handles this case by catching the raised exception and warning
the user.
However, without committing the current transaction first, the rollback
also undoes earlier changes, such as setting default values or renaming
a column.
When recomputing stored function fields, the `write` may trigger a
cache invalidation which leads to a recomputation of all the recordset
values, even the ones already saved in database.
This fixes the case where the lines of a one2many field are modified several
times by onchange methods: instead of retrieving the most recent updates, we
merge them with former updates.
This solution was written as an improvement of a proposal made by Alexis
Delattre and Sébastien Beau as #11620.
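A minimal sketch of the merge idea described above, assuming one2many
updates arrive as `(op, id, values)` command triplets (0 = create,
1 = update); the function name and exact merge rules are illustrative,
not the ORM's actual code:

```python
def merge_commands(former, recent):
    """Merge two lists of one2many commands: recent values win, but the
    keys of former updates that recent commands do not touch survive."""
    merged = {}
    for op, id_, values in former:
        merged[(op, id_)] = dict(values)
    for op, id_, values in recent:
        key = (op, id_)
        if key in merged:
            # Overlay the recent values on top of the former ones.
            merged[key].update(values)
        else:
            merged[key] = dict(values)
    return [(op, id_, values) for (op, id_), values in merged.items()]
```

For instance, merging `[(1, 5, {'a': 1, 'b': 2})]` with
`[(1, 5, {'b': 3})]` keeps `'a'` instead of dropping it.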
When writing a value on a translatable field in a language other than
English, the submitted *raw* value was saved in the database.
This could cause the following issues:
- empty value (provided as `False` by the web client) saved as the string
'false' in the translations table
- no encoding or sanitization conversion applied
- the `size` parameter of the translatable field ignored
Process the submitted translation through the symbol_set method to clean it
before blindly storing it in the database.
This allows converting `False` into `''` for empty values and fixes #10862
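A sketch of the cleaning step, under the assumption that a column's
symbol_set is a `(placeholder, converter)` pair as on old-api char
columns; the helper name and the sample converter are illustrative:

```python
def clean_translation(symbol_set, submitted):
    # Run the submitted translation through the column's converter
    # instead of storing the raw value blindly.
    _placeholder, converter = symbol_set
    return converter(submitted)

# A char-like symbol_set: empty values become '', and the value is
# truncated to the field size (here 16, purely as an example).
char_symbol_set = ('%s', lambda v: (v or '')[:16])
```

With this, a `False` submitted by the web client is stored as `''`
rather than the string 'false'.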
In the case of a `raw_data` export, a boolean with the value
`False` was not exported correctly.
The fix handles the `False` value as a valid value for a field.
opw-666142
When generating SQL queries, `column._symbol_c` must be used as the
placeholder, as is done by the `set` method of the column itself.
Otherwise it is not possible to define specialized columns.
When writing on a recordset with duplicates, the ORM raises a `MissingError`
because the rowcount differs from the number of injected ids. The fix simply
eliminates duplicates from the ids.
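The deduplication can be sketched as follows (the helper name is
illustrative); order is preserved so the UPDATE's rowcount matches the
number of ids passed in:

```python
def unique_ids(ids):
    # Remove duplicates while preserving the original order.
    seen = set()
    return [i for i in ids if not (i in seen or seen.add(i))]
```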
user_has_groups is used to check for groups in e.g. view attributes (`@groups`).
When trying to format lists of groups in views, it would break down as it would
pass e.g. `\n some.group` to `res.users.has_group`, which would look for
an xid with the module `\n some` and (oddly enough) not find it.
Theoretically this could also be handled inside res.users.has_group, but
that seems ever-so-slightly more risky, and has_group is only used
programmatically and should thus already be called correctly.
fixes #9797
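The sanitization amounts to stripping whitespace around each external
id before the lookup; a minimal sketch (helper name illustrative):

```python
def parse_groups(spec):
    # Split a @groups attribute like "base.group_user,\n some.group"
    # and strip whitespace around each external id, so the module part
    # is never '\n some'.
    return [g.strip() for g in spec.split(',') if g.strip()]
```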
When creating translations (click on blue flag) of a duplicated record, it used
to have the module information of the duplicated record.
It is also not useful to check the module when determining whether a
translation exists on this record, as the res_id is present in the query.
Fixes #9480
Add the possibility in the decorator to specify the `upgrade` and `downgrade`
functions that convert values between APIs. Both functions have the same API:
upgrade(self, value, *args, **kwargs)
downgrade(self, value, *args, **kwargs)
The arguments ``self``, ``*args`` and ``**kwargs`` are the ones passed to the
method, following its new-API signature.
Fixes#4944, #7830.
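A generic sketch of such a decorator, assuming `upgrade` converts the
incoming value and `downgrade` converts the result; the decorator name
and the exact conversion points are illustrative, not the actual ORM
implementation:

```python
import functools

def converted(upgrade=None, downgrade=None):
    """Decorate a method so its value is converted on the way in and
    its result converted on the way out, both converters receiving the
    method's own *args and **kwargs."""
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, value, *args, **kwargs):
            if upgrade is not None:
                value = upgrade(self, value, *args, **kwargs)
            result = method(self, value, *args, **kwargs)
            if downgrade is not None:
                result = downgrade(self, result, *args, **kwargs)
            return result
        return wrapper
    return decorator

class Model:
    @converted(upgrade=lambda self, v, *a, **k: int(v),
               downgrade=lambda self, v, *a, **k: str(v))
    def double(self, value):
        return value * 2
```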
When performing on model A a read_group with groupby equal to a many2one field F1
pointing to a model B which is ordered by an inherited non-stored field F2, the
group containing all the records from A with F1 not set was not returned.
Example:
model A = "hr.applicant"
model B = "res.users" (_order = "name,login")
inherited model = "res.partner"
field F1 = "user_id" (to "res.users")
field F2 = "name" (inherited from "res.partner")
In this example, the query generated by the function "read_group" was:
SELECT min(hr_applicant.id) AS id, count(hr_applicant.id) AS user_id_count , "hr_applicant"."user_id" as "user_id"
FROM "hr_applicant" LEFT JOIN "res_users" as "hr_applicant__user_id" ON ("hr_applicant"."user_id" = "hr_applicant__user_id"."id"),"res_partner" as "hr_applicant__user_id__partner_id"
WHERE ("hr_applicant"."active" = true) AND ("hr_applicant__user_id"."partner_id" = "hr_applicant__user_id__partner_id"."id")
GROUP BY "hr_applicant"."user_id","hr_applicant__user_id__partner_id"."name","hr_applicant__user_id"."login"
ORDER BY "hr_applicant__user_id__partner_id"."name" ,"hr_applicant__user_id"."login"
which always returned only the "hr.applicant" groups of records with a "user_id"
set, due to the inner join made on "res_partner".
This inner join on "res_partner" comes from the function "add_join", called by
"_inherits_join_add" in _generate_order_by_inner.
Introduced by dac52e344c
opw:651949
If multiple warnings were returned by a cascading onchange
call, only the last warning was displayed.
This revision concatenates the warnings in such a case.
opw-649275
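The concatenation can be sketched like this, assuming onchange results
carry a `warning` dict with `title` and `message` keys (the helper name
is illustrative):

```python
def merge_warning(result, warning):
    # Concatenate the new warning with any warning already collected
    # from earlier cascading onchange calls, instead of overwriting it.
    if 'warning' in result:
        result['warning'] = {
            'title': result['warning']['title'],
            'message': (result['warning']['message']
                        + '\n\n' + warning['message']),
        }
    else:
        result['warning'] = warning
    return result
```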
The risk was introduced by b7f1b9c.
If _store_set_values() calls another _create() or _write(),
the recomputation mechanism enters an infinite recursion,
trying to reevaluate on each call exactly the same fields
for the same records as the previous one.
This revision replaces the loop of _store_set_values()
with 2 nested loops:
- one that does not break the complete consumption
  of the recompute_old queue
  (tested thanks to revision a922d39),
- one that allows clearing the queue
  before each recomputation bundle, thereby fixing the recursion.
Closes #7558
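The two nested loops can be sketched as follows (the function name and
queue shape are illustrative, not the actual _store_set_values code):

```python
def consume_recompute_queue(queue, recompute):
    # Outer loop: keep draining until re-entrant _create/_write calls
    # stop adding work.
    while queue:
        # Snapshot the current bundle and clear the shared queue first,
        # so recomputations triggered below enqueue only *new* work
        # instead of re-adding the same fields forever.
        bundle = list(queue)
        del queue[:]
        for field, records in bundle:
            recompute(field, records)
```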
When ordering results on a many2one field, results are ordered by the
`_order` of the target model. The code was wrongly assuming that this
`_order` attribute only contains `_classic_read` fields (that can be
directly read from the table in database). The "ORDER BY" clause is now
correctly generated using the current table alias.
`res.users` can now be sorted by name.
Complements commit af9393d505
in light of commit 62b0d99cfe,
to really have the correct effect.
When the prefetching failed due to the presence of
extra records in the cache (for which the access is denied),
the `read` operation was indeed retried. However the
result was not stored in the cache because the cache
already held a FailedValue (automatically added when the
prefetch failed).
The new-api record prefetching algorithm attempts
to load data for all known records from the requested
model (i.e. all IDs present in the environment cache),
regardless of how indirectly/remotely they were
referenced. An indirect parent record may therefore
be prefetched along with its directly browsed children,
possibly crossing company boundaries involuntarily.
This patch implements a fallback mechanism when
the prefetching failed due to what looks like an
ACL restriction.
The implementation of `_read_from_database` handles
ACL directly and sets an `AccessError` as the cache value
for restricted records.
If a model (like `mail.message`) overrides `read` to
implement its own ACL checks and raises an `AccessError`
before calling `super()` (which will then call
`_read_from_database`), the cache will not be filled,
leading to an unexpected exception.
If this commit message looks familiar to you, that's
simply because this is the new-api counterpart of
b7865502e4
Some models (e.g. calendar.event) use "virtual" ids which are not represented
as integers. It was not possible to use the sorted method on those models
because calling int() fails.
This commit fixes the method, making it agnostic to the type of the
'id' member variable.
Fixes #7454
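An id-type-agnostic sort can be sketched like this (the `Record` stand-in
and helper name are illustrative):

```python
class Record:
    def __init__(self, id):
        self.id = id

def sorted_records(records):
    # Sort on the id attribute as-is, without coercing through int(),
    # so "virtual" string ids (e.g. '167-20150704') also sort.
    return sorted(records, key=lambda rec: rec.id)
```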
Pretty much completely rewritten theme with custom HTML translator and a
few parts of the old theme extracted to their own extensions.
Banner images turned out not to be that huge after all, and not worth the
hassle of having them live in a different repository.
co-authored with @stefanorigano
The `search` method of models has an additional keyword parameter, `count`.
Its absence from the docs could lead people to override `search`
without defining it, which would result in a `TypeError` when the method
is called with `count=`.
Closes #7451
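A correct override forwards every keyword, including `count`; a
self-contained sketch with a toy base class (the class names and the
base implementation are illustrative, not Odoo's actual code):

```python
class BaseModel:
    def search(self, args, offset=0, limit=None, order=None, count=False):
        # Toy stand-in: "match" everything in args.
        matches = list(args)
        return len(matches) if count else matches

class MyModel(BaseModel):
    # An override must declare (and forward) the count keyword too,
    # otherwise calls with count= raise a TypeError.
    def search(self, args, offset=0, limit=None, order=None, count=False):
        return super().search(args, offset=offset, limit=limit,
                              order=order, count=count)
```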
Revert 83282f2d for a cleaner sanitizing earlier in the generation of the error
message.
If the import fails, the error message contains the problematic value.
Escape this value in case it contains '%'.
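The escaping boils down to doubling every '%' so the value survives
later %-formatting of the error message (helper name illustrative):

```python
def escape_value(value):
    # Double every '%' so the value can be safely embedded in a string
    # that later goes through %-formatting.
    return str(value).replace('%', '%%')
```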
In commit 04ba0e99, we introduced an optimization for reading inherited fields
in a single query. There is an issue when you have more than one level of
`_inherits`. The query looks like:
SELECT ...
FROM table0, table1 AS alias1, table2 AS alias2
WHERE table0.link0 = alias1.id AND table1.link1 = alias2.id AND ...
^^^^^^
should be alias1
This fixes the issue, and adds a test to reproduce it. The fix is based on
@emiprotechnologies's own proposal, but is cleaner and does not break APIs.
The `set` method of the one2many class returns a list
of the fields that require recomputation,
the computation of function fields being delayed
for performance reasons.
Nevertheless, if the `set` method was called
through another `set` method, in other words,
in nested `set` calls, the fields to recompute returned
by the second, nested call to `set` were never recomputed:
the result list was simply lost.
e.g.:
```
create/write
│set
└─── create/write with recs.env.recompute set to False
     │set
     └─── create with recs.env.recompute set to False
```
To overcome this problem, the list of old-api style
computed fields to recompute is stored
within the environment, and this list is cleared
each time store_set_values has done its job of
recomputing all old-api style computed fields.
opw-629650
opw-632624
closes #6053
Consider a new field that uses the same compute method as another existing
field. When the field is introduced in database, its value must be computed on
existing records. In such a case, the existing field should not be written, as
its value is not supposed to have changed. Not writing on the existing field
can avoid useless recomputations in cascade, which is the reason we introduce
this patch.
Accessing `field.digits` can crash if no environment is available at that
point. This happens in function `get_pg_type()`, which is called from method
`_auto_init()`. An environment is simply created in the method's scope to be
available for `field.digits`.
As done in write, and already in the next version (see 0fd773a), accessing a
deleted record (through read or check access rights) should always raise a
MissingError instead of the generic except_orm.
This will allow code that ignores deleted records (e.g. the 'recompute'
method) to safely process the other records.
Fixes #6105
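The caller-side pattern this enables can be sketched as follows (the
exception and helper are stand-ins defined here for illustration, not
the ORM's actual classes):

```python
class MissingError(Exception):
    """Raised when a record no longer exists in the database."""

def process_existing(records, handle):
    # Skip records deleted by a concurrent transaction instead of
    # aborting the whole batch with a generic error.
    processed = []
    for rec in records:
        try:
            handle(rec)
        except MissingError:
            continue
        processed.append(rec)
    return processed
```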
A cross-registry cache was introduced by e2ea691ce.
The initial idea was praiseworthy but sub-optimal for servers with a
lot of registries.
When many registries are loaded, the cache size was huge, and
clearing some entries took a lot of CPU time, increasing the chances
of a timeout.
Also, the cache was not cleaned when a registry was removed from the
registry LRU (this operation would also consume time).
The following case has shown the issue: extend the model `res.company` by
adding at least two fields F and G, where F has a default value defined as:
lambda self: self.env.user.company_id.name
If the column F is created before G in the database, the existing records will
be filled with the default value of F. When the default value is computed, the
field `name` from a `res.company` is read, and other fields are prefetched,
including G. This operation fails, because G does not exist in database yet!
The optimization consists in using tuples for the attributes `inverse_fields`,
`computed_fields` and `_triggers`, and letting them share their value when it
is empty, which is common. This saves around 1.8MB per registry.
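The sharing trick can be sketched like this (the `Field` stand-in is
illustrative): because tuples are immutable, one empty instance can be
held as a class-level default instead of allocating a fresh empty list
per field and per attribute.

```python
EMPTY = ()

class Field:
    # Class-level defaults: every field shares the same immutable empty
    # tuple until an instance actually needs a non-empty value.
    inverse_fields = EMPTY
    computed_fields = EMPTY
    _triggers = EMPTY

f, g = Field(), Field()
assert f.inverse_fields is g.inverse_fields  # one shared object
```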
The computed value of the digits parameter is no longer stored into fields and
columns; instead the value is recomputed every time it is needed. Note that
performance is not an issue, since the method `get_precision` of model
'decimal.precision' is cached by the ORM. This simplifies the management of
digits on fields and saves about 300KB per registry.
Sometimes, the expected mro of the model is not the same as the one built with
a binary class hierarchy. So we reorder the base classes in order to match the
equivalent binary class hierarchy. This also fixes the cases where duplicates
appear in base classes.
Instead of composing classes in a binary tree hierarchy, we make one class that
inherits from all its base classes. This avoids keeping intermediate data
structures for columns, inherits, constraints, etc. This saves about 600KB per
registry.
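In plain Python terms, the change amounts to building one flat class
from all the bases at once rather than composing them pairwise; a
minimal sketch with placeholder base classes:

```python
class A: pass
class B: pass
class C: pass

# One flat class inheriting from every base at once, instead of a chain
# of intermediate binary-composition classes, each carrying its own
# data structures.
Model = type('Model', (A, B, C), {})
```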
The mapping model._all_columns takes quite some memory (around 2MB per
registry) because of the numerous dictionaries (one per model) and the
inefficient memory storage of column_info. Since it is deprecated and almost
no longer used, it can be computed on demand.