Optimize the limits inside the VM #202

@shargon

Description

Currently we have a lot of limits in the VM, and some of them require recomputing a count at every step to ensure it stays within the expected values. I think that is too expensive and can sometimes be unfair. For example: a single array of 1025 items is not allowed, but two arrays of 1024 items are.

I don't have the perfect solution yet, but I think we should improve this.

This method is very expensive, but with our current model it is needed:

private bool CheckStackSize(bool strict, int count = 1)
{
    is_stackitem_count_strict &= strict;
    stackitem_count += count;
    if (stackitem_count < 0) stackitem_count = int.MaxValue;
    if (stackitem_count <= MaxStackSize) return true;
    if (is_stackitem_count_strict) return false;
    // Deep inspect: recount every item reachable from all evaluation and alt stacks
    stackitem_count = GetItemCount(InvocationStack.Select(p => p.EvaluationStack).Distinct().Concat(InvocationStack.Select(p => p.AltStack).Distinct()).SelectMany(p => p));
    if (stackitem_count > MaxStackSize) return false;
    is_stackitem_count_strict = true;
    return true;
}
My proposal is something like this:

class Memory
{
    public int Max;
    public int Current;

    public StackItem CreateInteger(int value)
    {
        var ret = new StackItem(value);
        Current += ret.Size;
        return ret;
    }

    public void Clean(StackItem i) { Current -= i.Size; }
}

class ApplicationEngine
{
    Memory mem;
    public ApplicationEngine(Memory mem) { ... }
}

If we centralize the creation of StackItems in a single point, we can track the current memory usage without needing to recompute it at every step.
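To make the idea above concrete, here is a minimal, self-contained sketch of how the proposed Memory class could enforce a single global budget at allocation time. The StackItem type here is a hypothetical stand-in with a simple Size property (the real neo-vm types differ), and the limit check inside CreateInteger is my assumption about where enforcement would happen:

```csharp
using System;

// Hypothetical stand-in for the VM's stack item type.
class StackItem
{
    public int Value;
    public int Size => sizeof(int); // assumption: a fixed size per integer item
    public StackItem(int value) { Value = value; }
}

class Memory
{
    public int Max;
    public int Current;

    public StackItem CreateInteger(int value)
    {
        var ret = new StackItem(value);
        // Enforce the single global budget at the one creation point,
        // instead of recounting all stacks on every step.
        if (Current + ret.Size > Max)
            throw new InvalidOperationException("VM memory limit exceeded");
        Current += ret.Size;
        return ret;
    }

    public void Clean(StackItem i) { Current -= i.Size; }
}

class Program
{
    static void Main()
    {
        var mem = new Memory { Max = 16 * 1024 * 1024 }; // e.g. a 16 MB budget
        var a = mem.CreateInteger(1);
        var b = mem.CreateInteger(2);
        Console.WriteLine(mem.Current); // size of two integer items
        mem.Clean(a);                   // releasing an item returns its bytes
        Console.WriteLine(mem.Current);
    }
}
```

With this shape there is nothing like CheckStackSize to call per instruction: the accounting happens once, where the item is created, and once, where it is released.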

It is only an idea; I am sure the great minds at @neo-project/ngd-shanghai will have better ones.

But for me the goal is to have only one limit: 16 MB of RAM (for example).
